<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="http://147.102.106.44/rs/wiki/skins/common/feed.css?270"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="el">
		<id>http://147.102.106.44/rs/wiki/index.php?feed=atom&amp;target=Elenikaroutsos&amp;title=%CE%95%CE%B9%CE%B4%CE%B9%CE%BA%CF%8C%3A%CE%A3%CF%85%CE%BD%CE%B5%CE%B9%CF%83%CF%86%CE%BF%CF%81%CE%AD%CF%82</id>
		<title>RemoteSensing Wiki - Συνεισφορές χρήστη [el]</title>
		<link rel="self" type="application/atom+xml" href="http://147.102.106.44/rs/wiki/index.php?feed=atom&amp;target=Elenikaroutsos&amp;title=%CE%95%CE%B9%CE%B4%CE%B9%CE%BA%CF%8C%3A%CE%A3%CF%85%CE%BD%CE%B5%CE%B9%CF%83%CF%86%CE%BF%CF%81%CE%AD%CF%82"/>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/%CE%95%CE%B9%CE%B4%CE%B9%CE%BA%CF%8C:%CE%A3%CF%85%CE%BD%CE%B5%CE%B9%CF%83%CF%86%CE%BF%CF%81%CE%AD%CF%82/Elenikaroutsos"/>
		<updated>2026-04-03T23:31:57Z</updated>
		<subtitle>Από RemoteSensing Wiki</subtitle>
		<generator>MediaWiki 1.16.2</generator>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/Application_of_remote_sensing_for_critical_damage_assessment_and_strong_motion_analysis_in_the_Baghjan_oil_blowout_disaster,_Tinsukia,_Assam</id>
		<title>Application of remote sensing for critical damage assessment and strong motion analysis in the Baghjan oil blowout disaster, Tinsukia, Assam</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/Application_of_remote_sensing_for_critical_damage_assessment_and_strong_motion_analysis_in_the_Baghjan_oil_blowout_disaster,_Tinsukia,_Assam"/>
				<updated>2026-01-13T12:52:39Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;Application of remote sensing for critical damage assessment and strong motion analysis in the Baghjan oil blowout disaster, Tinsukia, Assam&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: SANGEETA SHARMA, BHAGYA PRATIM TALUKDAR, SAURABH BARUAH, ASHIM GOGOI, UMESH KALITA and KAPIL MALLIK''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s12040-025-02705-z &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 2 January 2026''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: BLOWOUT.png | thumb | '''Fig 1.''' Land use and land cover map in and around the blowout site (BGN-5) for the time periods: (A) 26/04/2020 (pre-blowout) (B) 29/06/2020 (post-blowout) (C) 29/12/2020 (six months post-blowout) (D) 29/04/2021 (E) 13/10/2021.]]&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: BLOWOUT2.png | thumb | '''Fig 2.''' Normalized difference vegetation index (NDVI) map for the time periods (A) 26/04/2020 (pre-blowout) (B) 29/06/2020 (post-blowout) (C) 29/12/2020 (six months post-blowout) (D) 29/04/2021 and (E) 13/10/2021.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2020, a blowout disaster occurred at an oil well in the village of Baghjan in Assam, India. The blowout led to the uncontrolled release of crude oil and natural gas into the village and surrounding ecosystems, culminating in a massive fire. Since then, analyses have shown high levels of hydrocarbon contamination, especially by polycyclic aromatic hydrocarbons (PAHs), which bioaccumulate. Additionally, locals have reported adverse health effects such as respiratory and gastrointestinal symptoms, likely due to exposure to VOCs and particulate matter. This paper conducted a remote sensing analysis using Landsat imagery and GIS in order to determine and quantify the spatial and temporal damage. Major findings include a significant increase in contaminated water areas and barren land, a decline in vegetation health, and elevated land surface temperatures. Fortunately, however, strong motion analysis indicated very little structural risk, meaning there is little danger of infrastructure collapsing from ground subsidence or earthquakes. &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The study area was 78.5 km², with an emphasis on a 5 km radius around the oil well. Prior to the disaster the area had high levels of vegetation, biodiversity, and dense forest. Landsat 8 imagery was used to assess the environmental damage via two instruments, the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS), which have spatial resolutions of 30 m and 100 m, respectively. The images were processed using ArcGIS and Google Earth Engine. Five time periods were analyzed, corresponding to before the blowout, right after the blowout, post-control, and two recovery phases roughly 1 and 1.5 years after the disaster. Land surface subsidence was evaluated with Sentinel-1A datasets from three dates: one before the blowout, one after the blowout but before the fire, and one after the fire. &amp;lt;br/&amp;gt;&lt;br /&gt;
The impact of the blowout was assessed by creating a land use/land cover (LULC) map using an unsupervised maximum likelihood classification technique, which takes the pixel values and assesses which category they most likely belong to. Additionally, a normalized difference vegetation index (NDVI) was calculated to observe the impact on vegetation. Ground motion and seismic activity were monitored using a strong motion accelerograph (SMA). The study used recordings of 20-minute-long ambient noise and an earthquake that occurred on July 14, 2020 to perform the analysis through the horizontal-to-vertical spectral ratio (HVSR) technique. &amp;lt;br/&amp;gt;&lt;br /&gt;
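The NDVI mentioned above is a simple band ratio. A minimal sketch, assuming Landsat 8 OLI band 4 (red) and band 5 (near-infrared) surface reflectances; the array values below are illustrative, not data from the study:

```python
import numpy as np

# Illustrative surface reflectance values; in the study these would
# come from Landsat 8 OLI band 4 (red) and band 5 (near-infrared).
red = np.array([[0.12, 0.30], [0.08, 0.25]])
nir = np.array([[0.45, 0.32], [0.50, 0.27]])

# NDVI = (NIR - Red) / (NIR + Red); values near 1 indicate dense,
# healthy vegetation, values near or below 0 indicate water or barren land.
ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 2))
```

In practice the same per-pixel expression is applied to whole scenes, e.g. as a band-math operation in ArcGIS or Google Earth Engine.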
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Results''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Figure 1 depicts the land use and land cover map for five different points in time in the study area. At the center, depicted by a dot, is the site of the blowout. Fig. 1a shows the land use map prior to the incident, while the remainder are after. Barren land originally made up only 0.83% of the area but jumped to 14.33% after the blowout in June 2020, likely due to the fire. Subsequently, this number decreased to 11.03% by December, exhibiting some recovery, though nowhere near as low as pre-blowout. Vegetation decreased by 3% from April to June 2020. Furthermore, the quantity of filthy water (which refers to chemically contaminated and stagnant water) increased from 3.01% to 11.2% and then decreased to 6.19%. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The NDVI results, shown in figure 2, indicate that vegetation was heavily impacted by the blowout. Green represents low NDVI, while red represents high NDVI, signaling denser vegetation. The large green line is the river, so it makes sense that that portion had low NDVI values. Figure 2b, which shows the NDVI immediately after the blowout, depicts a much larger area of green in the top left and below the river. The green area below the river correlates well with the accumulation of filthy water shown in the land use map. Figure 2c shows an increase in NDVI (from -0.18 to 0.71 in the area by the oil well). The last two figures depict a stabilization of NDVI values. The land surface temperature map showed that in December 2020 (six months post-blowout), April 2021, and October 2021 the temperatures were 15–19°C, 26–31°C, and 23–28°C, respectively, showing an overall rise in temperatures following the incident. &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Surface deformation was also analyzed in the study, because following the disaster, sub-surface water resources were used to manage the damage and fires. Such excessive extraction of water has the potential to deform the surface, so it was important to measure this impact as well. Interferograms were created for two time periods, May 23–June 4, 2020 and June 4–June 16, 2020, but fortunately they showed no notable shifting of the ground. The researchers attribute this to the fact that the area has sediments with high hydraulic conductivity, which accommodates the extraction of subsurface water. Microearthquakes were recorded at the blowout site by the strong motion network within a distance of 1.5 to 3.0 km, indicating that the blowout was responsible for them.&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Conclusion''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This study used remote sensing tools to analyze the effects of the Baghjan oil blowout on the surrounding area. Land use was determined by conducting an unsupervised classification, while vegetation health was assessed by calculating the NDVI. Furthermore, ground motion changes were measured using a strong motion accelerograph. The results indicate a significant decrease in vegetation cover, as well as an increase in the quantity of filthy water and barren land. Ground temperature also increased, but little structural risk was found following the heavy extraction of groundwater. Overall, this paper highlights the effectiveness of remote sensing and GIS tools for environmental damage assessment.&lt;br /&gt;
&lt;br /&gt;
[[category:Περιβαλλοντικές Επιπτώσεις]]&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:BLOWOUT2.png</id>
		<title>Αρχείο:BLOWOUT2.png</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:BLOWOUT2.png"/>
				<updated>2026-01-13T12:51:16Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:BLOWOUT.png</id>
		<title>Αρχείο:BLOWOUT.png</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:BLOWOUT.png"/>
				<updated>2026-01-13T12:50:55Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/%CE%9A%CE%B1%CF%81%CE%BF%CF%8D%CF%84%CF%83%CE%BF%CF%85_%CE%95%CE%BB%CE%AD%CE%BD%CE%B7</id>
		<title>Καρούτσου Ελένη</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/%CE%9A%CE%B1%CF%81%CE%BF%CF%8D%CF%84%CF%83%CE%BF%CF%85_%CE%95%CE%BB%CE%AD%CE%BD%CE%B7"/>
				<updated>2026-01-13T12:43:16Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [[Enhancing offshore wind Resource assessment through neural network-based HF radar data analysis]]&lt;br /&gt;
* [[Mapping Wind Turbine Distribution in Forest Areas of China Using Deep Learning Methods.]]&lt;br /&gt;
* [[Exploring crop health and its associations with fungal soil microbiome composition using machine learning applied to remote sensing data]]&lt;br /&gt;
* [[Application of remote sensing for critical damage assessment and strong motion analysis in the Baghjan oil blowout disaster, Tinsukia, Assam]]&lt;br /&gt;
* [[CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[category:ΔΠΜΣ &amp;quot;Περιβάλλον &amp;amp; Ανάπτυξη&amp;quot; (Αθήνα) ]]&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:42:36Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di, Xinnan Di, Huiyao Xiao, Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: ARCHIT.png | thumb | '''Fig 1.''' Overall network architecture of the proposed CBENet framework.]]&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: EVAL.png | thumb | '''Fig 2.''' Evaluation Criteria.]]&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: OIL.png | thumb | '''Fig 3.''' Oil spill detection results of five detection methods on massive oil spill areas.]]&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: DETECT.png | thumb | '''Table 1''' Average evaluations of the five detection methods on the overall test set.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills are a major hazard in marine environments, which is why it is important to detect them early. Detection typically relies on microwave remote sensing imagery, specifically synthetic aperture radar (SAR). This technique uses radar pulses from aircraft and can identify oil spills with high-resolution imagery; it operates continuously and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers list various models tested in previous studies, as well as the issues they overcame; among the technologies mentioned are deep convolutional neural networks (DCNNs), U-shaped networks, the multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for detecting oil spills. This paper proposes a contextual and boundary-enhanced network (CBENet) to analyze SAR images. The goal of this technique is to address certain shortcomings of existing approaches: for example, oil spills are not identified well at varied scales, and the boundaries of spills are often blurry, hurting detection accuracy. The paper finds that CBENet is an effective technique, validated via various qualitative and quantitative evaluations. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt; CBENet was developed as an encoder-decoder architecture: an encoder extracts spatial features from the oil spill image while simultaneously learning their representation, and the decoder, which mirrors the encoder's structure, takes the fused contextual features and produces the detection result as an oil spill map. The general idea of this architecture is shown in Figure 1. Between the encoder and decoder sits the contextual fusion module, a vital connection: it takes the encoder's outputs and provides “context” to the extracted features, considering not only individual pixels but larger areas simultaneously. The fused contextual features are then passed to the decoder. &amp;lt;br/&amp;gt;&lt;br /&gt;
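The encoder-decoder idea described above can be sketched in miniature. The following is an illustrative toy, not the authors' implementation: `avg_pool2` stands in for the encoder's downsampling, `contextual_fusion` is a hypothetical stand-in for CBENet's contextual fusion module (here simply blending each local feature with the global mean), and `upsample2` plays the decoder.

```python
import numpy as np

def avg_pool2(x):
    """Encoder step: halve resolution by 2x2 average pooling."""
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def upsample2(x):
    """Decoder step: double resolution by nearest-neighbour upsampling."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def contextual_fusion(feat):
    """Blend each position with the global mean, so every feature 'sees'
    image-wide context rather than only its own neighbourhood (hypothetical
    stand-in for the contextual fusion module)."""
    return 0.5 * feat + 0.5 * feat.mean()

def toy_encoder_decoder(img):
    f1 = avg_pool2(img)            # encoder, level 1
    f2 = avg_pool2(f1)             # encoder, level 2 (coarsest features)
    fused = contextual_fusion(f2)  # context injected between encoder and decoder
    d1 = upsample2(fused) + f1     # decoder with a skip connection
    out = upsample2(d1)
    return 1.0 / (1.0 + np.exp(-out))   # per-pixel spill probability

sar = np.random.rand(8, 8)         # stand-in for a SAR intensity patch
prob = toy_encoder_decoder(sar)    # same spatial size, values in (0, 1)
```

The output mask has the same spatial size as the input, mirroring how a real encoder-decoder segmentation network returns a per-pixel detection map.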
&lt;br /&gt;
&amp;lt;/p&amp;gt; The following section highlights the formulations used to train CBENet. “Lbe” denotes the boundary-enhanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two terms: a pixel-level loss, which measures the error over all pixels in the image, and a boundary-level loss, which measures the error of the pixels on the border of the spill. CBENet is effective for three reasons: it creates a multiscale representation of oil spill features, its architecture enhances the fusion of contextual features, and detection becomes more accurate when the boundary-enhanced loss function is included.&lt;br /&gt;
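A minimal sketch of how such a loss might combine the two terms. The paper's exact formulation is not reproduced here: the 4-neighbour boundary extraction and the weighting factor `lam` are assumptions made for illustration.

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Per-pixel binary cross-entropy."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def boundary_pixels(y):
    """Mark pixels whose 4-neighbourhood crosses the spill border."""
    b = np.zeros_like(y, dtype=bool)
    b[:-1, :] |= y[:-1, :] != y[1:, :]
    b[1:, :] |= y[1:, :] != y[:-1, :]
    b[:, :-1] |= y[:, :-1] != y[:, 1:]
    b[:, 1:] |= y[:, 1:] != y[:, :-1]
    return b

def boundary_enhanced_loss(pred, y, lam=1.0):
    """Pixel-level loss over the whole image plus a boundary-level loss
    restricted to border pixels ('lam' is an assumed weighting)."""
    pixel_loss = bce(pred, y).mean()
    b = boundary_pixels(y)
    boundary_loss = bce(pred[b], y[b]).mean() if b.any() else 0.0
    return pixel_loss + lam * boundary_loss

# Ground truth: a 2x2 spill inside a 4x4 patch (invented toy data).
y = np.zeros((4, 4)); y[1:3, 1:3] = 1.0
sharp = np.where(y == 1.0, 0.9, 0.1)    # confident prediction
blurry = np.full((4, 4), 0.5)           # indecisive everywhere
```

Because the boundary term re-weights border pixels, a prediction that is crisp at the spill edge scores a lower loss than a uniformly indecisive one, which is the intuition behind the improved boundary accuracy reported above.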
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Results''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first experiment assessed the oil spill detection capability of CBENet using a microwave remote sensing dataset of 1274 SAR images. The training set contained 1019 images and the test set contained 255 images, used to evaluate the trained model. The results were compared against four other typical deep learning detection methods using four metrics: precision, recall, F1-score, and intersection over union (IoU). These computations are shown in Figure 2. &amp;lt;br/&amp;gt;&lt;br /&gt;
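The four evaluation metrics are standard and follow directly from pixel counts on a binary mask. A minimal sketch (the toy masks below are invented for illustration):

```python
import numpy as np

def detection_metrics(pred, truth):
    """Pixel-wise precision, recall, F1-score and IoU for binary spill masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()     # spill pixels correctly found
    fp = np.logical_and(pred, ~truth).sum()    # false alarms
    fn = np.logical_and(~pred, truth).sum()    # spill pixels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0   # intersection over union
    return precision, recall, f1, iou

# Invented toy masks: the prediction finds 3 of 4 spill pixels plus 1 false alarm.
truth = np.zeros((4, 4), dtype=int); truth[1:3, 1:3] = 1
pred = truth.copy(); pred[2, 2] = 0; pred[0, 0] = 1
p, r, f1, iou = detection_metrics(pred, truth)
# p = 0.75, r = 0.75, f1 = 0.75, iou = 0.6
```

Note that IoU is always the strictest of the four here: a single missed or spurious pixel penalizes both the intersection and the union.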
&lt;br /&gt;
Figure 3 shows the observation images of four oil spills and their ground truth detection results; in other words, the bottom row shows what an effective detection method should produce. Figure 4 shows the five methods' detection results. Based on these results, CBENet clearly performs exceptionally well compared to the other models. For example, image 3c shows an oil spill with a thin, intricate portion to the left of the main body; only the CBENet model successfully detected this area. The areas within the red circles show other details CBENet captured that the other models did not. Quantitative evaluations were also conducted for the five models based on the metrics shown in Figure 2; this data is shown in Table 1. CBENet outperformed all other models on precision, F1-score, and IoU, except for recall, where MCAN had a slightly better result. Additionally, CBENet trained with the boundary-enhanced loss function (Lbe) performed better than CBENet trained only with the pixel-level loss (Lce). &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Conclusion''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CBENet is an effective way to detect oil spills in SAR observation images. It outperforms other models because it better handles blurry boundaries, producing more accurate results with details other models may miss. Furthermore, the inclusion of the boundary-enhanced loss function further improves CBENet's performance. Accurate detection of oil spills matters for mitigating pollution and damage to marine life: if a spill's size is over- or underestimated, clean-up and mitigation procedures may be misdirected.&lt;br /&gt;
&lt;br /&gt;
[[category:Περιβαλλοντικές Επιπτώσεις]]&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:37:29Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di, Xinnan Di, Huiyao Xiao, Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: ARCHIT.png | thumb | '''Fig 1.''' Overall network architecture of the proposed CBENet framework.]]&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: EVAL.png | thumb | '''Fig 2.''' Evaluation Criteria.]]&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: OIL.png | thumb | '''Fig 3.''' Oil spill detection results of five detection methods on massive oil spill areas.]]&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: DETECT.png | thumb | '''Table 1''' Average evaluations of the five detection methods on the overall test set.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills are a major hazard in marine environments, which makes early detection important. Detection typically relies on microwave remote sensing imagery, specifically synthetic aperture radar (SAR). SAR transmits radar pulses from aircraft or satellites and can image oil spills at high resolution, continuously, and largely unaffected by weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers list various models tested in previous studies, along with the issues those models overcame, including deep convolutional neural networks (DCNNs), U-shaped networks, the multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other methodologies for detecting oil spills. This paper introduces a contextual and boundary-enhanced network (CBENet) to analyze SAR images. The technique addresses certain limitations of SAR imagery: oil spills are not identified well at varied scales, and spill boundaries are often blurry, which reduces detection accuracy. The paper finds that CBENet is an effective technique, validated through various qualitative and quantitative evaluations. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt; CBENet was developed as an encoder-decoder architecture: an encoder extracts spatial features from the oil spill image while simultaneously learning their representation, and the decoder, which mirrors the encoder's structure, takes the fused contextual features and produces the detection result as an oil spill map. The general idea of this architecture is shown in Figure 1. Between the encoder and decoder sits the contextual fusion module, a vital connection: it takes the encoder's outputs and provides “context” to the extracted features, considering not only individual pixels but larger areas simultaneously. The fused contextual features are then passed to the decoder. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt; The following section highlights the formulations used to train CBENet. “Lbe” denotes the boundary-enhanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two terms: a pixel-level loss, which measures the error over all pixels in the image, and a boundary-level loss, which measures the error of the pixels on the border of the spill. CBENet is effective for three reasons: it creates a multiscale representation of oil spill features, its architecture enhances the fusion of contextual features, and detection becomes more accurate when the boundary-enhanced loss function is included.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Results''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first experiment assessed the oil spill detection capability of CBENet using a microwave remote sensing dataset of 1274 SAR images. The training set contained 1019 images and the test set contained 255 images, used to evaluate the trained model. The results were compared against four other typical deep learning detection methods using four metrics: precision, recall, F1-score, and intersection over union (IoU). These computations are shown in Figure 2. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Figure 3 shows the observation images of four oil spills and their ground truth detection results; in other words, the bottom row shows what an effective detection method should produce. Figure 4 shows the five methods' detection results. Based on these results, CBENet clearly performs exceptionally well compared to the other models. For example, image 3c shows an oil spill with a thin, intricate portion to the left of the main body; only the CBENet model successfully detected this area. The areas within the red circles show other details CBENet captured that the other models did not. Quantitative evaluations were also conducted for the five models based on the metrics shown in Figure 2; this data is shown in Table 1. CBENet outperformed all other models on precision, F1-score, and IoU, except for recall, where MCAN had a slightly better result. Additionally, CBENet trained with the boundary-enhanced loss function (Lbe) performed better than CBENet trained only with the pixel-level loss (Lce). &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Conclusion''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CBENet is an effective way to detect oil spills in SAR observation images. It outperforms other models because it better handles blurry boundaries, producing more accurate results with details other models may miss. Furthermore, the inclusion of the boundary-enhanced loss function further improves CBENet's performance. Accurate detection of oil spills matters for mitigating pollution and damage to marine life: if a spill's size is over- or underestimated, clean-up and mitigation procedures may be misdirected.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:EVAL.png</id>
		<title>Αρχείο:EVAL.png</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:EVAL.png"/>
				<updated>2026-01-13T12:34:15Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:34:05Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di , Xinnan Di , Huiyao Xiao , Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: ARCHIT.png | thumb | '''Fig 1.''' Overall network architecture of the proposed CBENet framework.]]&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: EVAL.png | thumb | '''Fig 2.''' Evalutation Criteria.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills can be a major issue in marine environments, which is why it is important to be able to detect them early. Detection typically entails the use of microwave remote sensing images, specifically synthetic aperture radar (SAR). This technique uses radar pulses from aircraft and can identify oil spills with high-resolution imagery, nonstop, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers listed various models that were tested in previous studies, as well as the issues they overcame. Some of the technologies they mentioned are deep convolutional neural networks (DCNNs), U-shaped networks, multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for detecting oil spills. This paper created a contextual and boundary-enhanced network (CBENet) to analyze the SAR images. The goal of this technique is to address certain issues regarding the quality of the SAR images. For example, the images don’t identify oil spills well at varied scales. Additionally, boundaries of the spills are often blurry, affecting the accuracy of detecting them. The paper finds that CBENet is an effective technique and was validated via various qualitative and quantitative evaluations &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt; CBENet was developed as an encoder-decoder architecture. What this means is that an encoder takes apart spatial features from the oil spill image while simultaneously learning and understanding them. The decoder has a similar architecture to the encoder and takes the fused contextual features and produces a detection result as an oil spill image. The general idea of how this architecture works is shown in Figure 1. Between the encoder and decoder is the contextual fusion model which is a vital connection. Essentially this takes the inputs from the encoder and is able to provide “context” to the features extracted. This doesn’t only look at individual pixels but rather are larger areas simultaneously. The fused contextual features are then given to the decoder. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt; The following section highlights the formulations used to train CBENet. “Lbe” was defined as the boundary-enchanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two parameters: pixel-level loss values, which is the error of all pixels in the image, and boundary-level loss values, which computes errors of the pixels on the border of the spill. CBENet is effective for three reasons, the first being that it can create a multiscale representation of oil spill features. The second is that the architecture enhances the fusion of contextual features. Lastly, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Results''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first experiment was to assess the oil spill detection capabilities of the CBNet by using microwave remote sensing datasets which includes 1274 SAR images. The training dataset contained 1019 images and the test set contained 255 images that check the accuracy of the training dataset. These results were compared to four other typical deep learning detection methods. This comparison was conducted using four metrics: Precision, recall, F1-score, and intersection over union. These computations are shown in Figure 2. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Figure 3 shows the observation image of four oil spills and their ground truth detection result. In other words, the bottom row shows what the detection methods should show if they are effective enough. Figure 4 shows what the five detection results were. Based upon these results, it is clear that the CBNet works exceptionally compared to the other models. For example, image 3c shows an oil spill with a thin and intricate portion to the left of the main portion. Only the CBNet model was able to successfully detect this area. The areas within the red circle shows other details the CBNet model was able to capture that the other models did not. Quantitative evaluations were also conducted for the five models based on the metrics shown in Figure 2. This data is shown in Table 1. CBNet outperformed all the other models in all methods (precision, F1-score, and IoU) except for Recall where MCAN had a slightly better result. Additionally, the CBNet with boundary-enhanced loss function (Lbe) performed better than CBNet that was trained only with pixel-level loss values (Lce). &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Conclusion''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CBNet is an effective way to detect oil spills from SAR observation images. This model outperforms other models because it is able to better handle blurry boundaries. This allows for results that are more accurate with details that other models may miss. Furthermore, the inclusion of a boundary-enhanced loss function also improves the performance of the CBNet. The accurate detection of oil spills is important for the mitigation of pollution and damage to marine life. If an oil spill’s size is over or underestimated, it may affect clean-up and mitigation procedures.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:32:01Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di , Xinnan Di , Huiyao Xiao , Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: ARCHIT.png | thumb | '''Fig 1.''' Overall network architecture of the proposed CBENet framework.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills can be a major issue in marine environments, which is why it is important to be able to detect them early. Detection typically entails the use of microwave remote sensing images, specifically synthetic aperture radar (SAR). This technique uses radar pulses from aircraft and can identify oil spills with high-resolution imagery, nonstop, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers listed various models that were tested in previous studies, as well as the issues they overcame. Some of the technologies they mentioned are deep convolutional neural networks (DCNNs), U-shaped networks, multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for detecting oil spills. This paper created a contextual and boundary-enhanced network (CBENet) to analyze the SAR images. The goal of this technique is to address certain issues regarding the quality of the SAR images. For example, the images don’t identify oil spills well at varied scales. Additionally, boundaries of the spills are often blurry, affecting the accuracy of detecting them. The paper finds that CBENet is an effective technique and was validated via various qualitative and quantitative evaluations &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt; CBENet was developed as an encoder-decoder architecture. What this means is that an encoder takes apart spatial features from the oil spill image while simultaneously learning and understanding them. The decoder has a similar architecture to the encoder and takes the fused contextual features and produces a detection result as an oil spill image. The general idea of how this architecture works is shown in Figure 1. Between the encoder and decoder is the contextual fusion model which is a vital connection. Essentially this takes the inputs from the encoder and is able to provide “context” to the features extracted. This doesn’t only look at individual pixels but rather are larger areas simultaneously. The fused contextual features are then given to the decoder. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt; The following section highlights the formulations used to train CBENet. “Lbe” was defined as the boundary-enchanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two parameters: pixel-level loss values, which is the error of all pixels in the image, and boundary-level loss values, which computes errors of the pixels on the border of the spill. CBENet is effective for three reasons, the first being that it can create a multiscale representation of oil spill features. The second is that the architecture enhances the fusion of contextual features. Lastly, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Results''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first experiment was to assess the oil spill detection capabilities of the CBNet by using microwave remote sensing datasets which includes 1274 SAR images. The training dataset contained 1019 images and the test set contained 255 images that check the accuracy of the training dataset. These results were compared to four other typical deep learning detection methods. This comparison was conducted using four metrics: Precision, recall, F1-score, and intersection over union. These computations are shown in Figure 2. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Figure 3 shows the observation image of four oil spills and their ground truth detection result. In other words, the bottom row shows what the detection methods should show if they are effective enough. Figure 4 shows what the five detection results were. Based upon these results, it is clear that the CBNet works exceptionally compared to the other models. For example, image 3c shows an oil spill with a thin and intricate portion to the left of the main portion. Only the CBNet model was able to successfully detect this area. The areas within the red circle shows other details the CBNet model was able to capture that the other models did not. Quantitative evaluations were also conducted for the five models based on the metrics shown in Figure 2. This data is shown in Table 1. CBNet outperformed all the other models in all methods (precision, F1-score, and IoU) except for Recall where MCAN had a slightly better result. Additionally, the CBNet with boundary-enhanced loss function (Lbe) performed better than CBNet that was trained only with pixel-level loss values (Lce). &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Conclusion''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CBENet is an effective way to detect oil spills from SAR observation images. It outperforms other models because it better handles blurry boundaries, yielding more accurate results with details that other models may miss. Furthermore, the inclusion of the boundary-enhanced loss function further improves CBENet’s performance. Accurate detection of oil spills is important for mitigating pollution and damage to marine life: if a spill’s size is over- or underestimated, clean-up and mitigation procedures may be misdirected.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:ARCHIT.png</id>
		<title>Αρχείο:ARCHIT.png</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:ARCHIT.png"/>
				<updated>2026-01-13T12:31:38Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:31:27Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di , Xinnan Di , Huiyao Xiao , Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: ARCHIT.png | thumb | '''Fig 1.''' Distribution and labeling of datasets. (a) The spatial distribution of the training set, validation set, testing set, and evaluation set. (b) Schematic diagram of data labeling.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills are a major issue in marine environments, which is why early detection is important. Detection typically relies on microwave remote sensing images, specifically synthetic aperture radar (SAR). This technique uses radar pulses from aircraft, provides high-resolution imagery, operates continuously, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers list various models tested in previous studies, as well as the issues they overcame; some of the technologies mentioned are deep convolutional neural networks (DCNNs), U-shaped networks, the multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for detecting oil spills. This paper introduces a contextual and boundary-enhanced network (CBENet) to analyze SAR images. The goal of this technique is to address certain issues with the SAR images: for example, oil spills are not identified well at varied scales, and the boundaries of the spills are often blurry, reducing detection accuracy. The paper finds that CBENet is an effective technique, validated via various qualitative and quantitative evaluations. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt; CBENet was developed as an encoder-decoder architecture: the encoder extracts spatial features from the oil spill image and learns to represent them, while the decoder, which mirrors the encoder’s architecture, takes the fused contextual features and produces the detection result as an oil spill image. The general idea of this architecture is shown in Figure 1. Between the encoder and decoder sits the contextual fusion module, a vital connection: it takes the encoder’s outputs and provides “context” for the extracted features, looking not only at individual pixels but at larger areas simultaneously. The fused contextual features are then passed to the decoder. &amp;lt;br/&amp;gt;&lt;br /&gt;
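The encoder–fusion–decoder flow described above can be sketched roughly as follows; the function names, the 2x pooling factor, and the fusion rule are illustrative assumptions, not the paper’s implementation:

```python
import numpy as np

def encode(img):
    """Downsample by 2x average pooling -- a stand-in for conv feature extraction.
    Assumes the input has even height and width."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def contextual_fusion(feat):
    """Blend each local feature with its global context (mean over the whole map) --
    an illustrative stand-in for the paper's contextual fusion module."""
    return 0.5 * feat + 0.5 * feat.mean()

def decode(feat):
    """Upsample back to input resolution by nearest-neighbor repetition."""
    return feat.repeat(2, axis=0).repeat(2, axis=1)

def detect(img, threshold=0.5):
    """Full pipeline: encoder, contextual fusion, decoder, binary detection mask."""
    fused = contextual_fusion(encode(img))
    return (decode(fused) > threshold).astype(np.uint8)
```

The point of the sketch is the data flow, not the operators: real encoders and decoders are learned convolutional stages, but the fusion step still sits between them and mixes local responses with wider-area context before decoding.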
&lt;br /&gt;
The following section highlights the formulations used to train CBENet. “Lbe” was defined as the boundary-enhanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two terms: a pixel-level loss, which measures the error over all pixels in the image, and a boundary-level loss, which measures the error of the pixels along the border of the spill. CBENet is effective for three reasons: first, it creates a multiscale representation of oil spill features; second, its architecture enhances the fusion of contextual features; and third, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;br /&gt;
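A boundary-enhanced loss in the spirit of Lbe can be sketched as below; the boundary extraction (spill pixels touching background) and the weight lam are assumptions for illustration, and the paper’s exact formulation may differ:

```python
import numpy as np

def pixel_loss(pred, label, eps=1e-7):
    """Mean binary cross-entropy over all pixels (the Lce-style pixel-level term)."""
    p = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(label * np.log(p) + (1 - label) * np.log(1 - p))))

def boundary_mask(label):
    """Spill pixels with at least one background 4-neighbor, i.e. the spill border."""
    padded = np.pad(label, 1, constant_values=0)
    neigh_min = np.minimum.reduce([
        padded[:-2, 1:-1], padded[2:, 1:-1],   # up / down neighbors
        padded[1:-1, :-2], padded[1:-1, 2:],   # left / right neighbors
    ])
    return (label == 1) * (neigh_min == 0)

def boundary_enhanced_loss(pred, label, lam=1.0):
    """Lbe-style total: pixel-level BCE plus BCE restricted to boundary pixels."""
    mask = boundary_mask(label).astype(bool)
    lb = pixel_loss(pred[mask], label[mask]) if mask.any() else 0.0
    return pixel_loss(pred, label) + lam * lb
```

Because the second term is evaluated only on border pixels, errors along the blurry edge of a spill are counted twice, which is what pushes the network toward sharper boundary detection.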
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Results''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first experiment assessed the oil spill detection capabilities of CBENet using a microwave remote sensing dataset of 1,274 SAR images. The training set contained 1,019 images and the test set contained 255 images used to evaluate the trained model. CBENet’s results were compared with those of four other typical deep learning detection methods using four metrics: precision, recall, F1-score, and intersection over union (IoU). These computations are shown in Figure 2. &amp;lt;br/&amp;gt;&lt;br /&gt;
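The four comparison metrics have standard definitions for binary detection masks and can be computed as below (a generic sketch, not the authors’ evaluation code):

```python
import numpy as np

def metrics(pred, truth):
    """Precision, recall, F1-score, and intersection over union for 0/1 masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = float(np.sum(pred * truth))     # detected spill pixels that are real
    fp = float(np.sum(pred * ~truth))    # false alarms
    fn = float(np.sum(~pred * truth))    # missed spill pixels
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall > 0 else 0.0
    union = tp + fp + fn                 # pixels in either mask
    iou = tp / union if union > 0 else 0.0
    return {"precision": precision, "recall": recall, "f1": f1, "iou": iou}
```

Precision penalizes false alarms, recall penalizes missed spill pixels, F1 balances the two, and IoU measures overall mask overlap, which is why the paper reports all four together.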
&lt;br /&gt;
Figure 3 shows the observation images of four oil spills and their ground-truth detection results; in other words, the bottom row shows what an effective detection method should produce. Figure 4 shows the detection results of the five models. Based on these results, it is clear that CBENet performs exceptionally well compared to the other models. For example, image 3c shows an oil spill with a thin, intricate portion to the left of the main body, and only the CBENet model was able to detect this area. The areas within the red circles show other details that CBENet captured but the other models did not. Quantitative evaluations were also conducted for the five models based on the metrics shown in Figure 2; this data is shown in Table 1. CBENet outperformed all the other models on every metric (precision, F1-score, and IoU) except recall, where MCAN had a slightly better result. Additionally, CBENet trained with the boundary-enhanced loss function (Lbe) performed better than CBENet trained only with the pixel-level loss (Lce). &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Conclusion''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CBENet is an effective way to detect oil spills from SAR observation images. It outperforms other models because it better handles blurry boundaries, yielding more accurate results with details that other models may miss. Furthermore, the inclusion of the boundary-enhanced loss function further improves CBENet’s performance. Accurate detection of oil spills is important for mitigating pollution and damage to marine life: if a spill’s size is over- or underestimated, clean-up and mitigation procedures may be misdirected.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:27:58Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di , Xinnan Di , Huiyao Xiao , Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills are a major issue in marine environments, which is why early detection is important. Detection typically relies on microwave remote sensing images, specifically synthetic aperture radar (SAR). This technique uses radar pulses from aircraft, provides high-resolution imagery, operates continuously, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers list various models tested in previous studies, as well as the issues they overcame; some of the technologies mentioned are deep convolutional neural networks (DCNNs), U-shaped networks, the multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for detecting oil spills. This paper introduces a contextual and boundary-enhanced network (CBENet) to analyze SAR images. The goal of this technique is to address certain issues with the SAR images: for example, oil spills are not identified well at varied scales, and the boundaries of the spills are often blurry, reducing detection accuracy. The paper finds that CBENet is an effective technique, validated via various qualitative and quantitative evaluations.&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt; CBENet was developed as an encoder-decoder architecture: the encoder extracts spatial features from the oil spill image and learns representations of them, while the decoder, which mirrors the encoder's architecture, takes the fused contextual features and produces the detection result as an oil spill map. The overall architecture is shown in Figure 1. Between the encoder and decoder sits the contextual fusion module, a vital connection: it takes the encoder's outputs and provides “context” to the extracted features, considering not only individual pixels but larger areas simultaneously. The fused contextual features are then passed to the decoder. &amp;lt;br/&amp;gt;&lt;br /&gt;
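The encoder / contextual-fusion / decoder flow described above can be sketched as follows. This is only an illustrative sketch: average pooling and nearest-neighbour upsampling stand in for CBENet's learned convolutional stages, and the dark-pixel threshold at the end is a hypothetical stand-in for the network's output head, not the paper's method.

```python
import numpy as np

def encoder(img):
    """Downsample by 2x average pooling at each stage, collecting
    multiscale feature maps (a stand-in for convolutional stages)."""
    feats, x = [], img
    for _ in range(3):
        h, w = x.shape
        x = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        feats.append(x)
    return feats

def contextual_fusion(feats):
    """Fuse features across scales: upsample every map to the finest
    encoder resolution and average, so each location also 'sees'
    coarser, larger-area context rather than individual pixels only."""
    target = feats[0].shape
    fused = np.zeros(target)
    for f in feats:
        rep = target[0] // f.shape[0]
        fused += np.kron(f, np.ones((rep, rep)))  # nearest-neighbour upsample
    return fused / len(feats)

def decoder(fused, out_shape):
    """Upsample fused features back to image resolution and threshold
    into a binary detection mask (oil slicks appear dark in SAR)."""
    rep = out_shape[0] // fused.shape[0]
    score = np.kron(fused, np.ones((rep, rep)))
    return (score < score.mean()).astype(np.uint8)

img = np.random.rand(32, 32)                      # toy SAR intensity patch
mask = decoder(contextual_fusion(encoder(img)), img.shape)
```

The key point mirrored from the text is that fusion happens between encoder and decoder, combining all encoder scales before any upsampling occurs.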
&lt;br /&gt;
&amp;lt;/p&amp;gt; The following section highlights the formulations used to train CBENet. “Lbe” denotes the boundary-enhanced loss function, which improves accuracy when detecting the boundaries of an oil spill. It combines two terms: a pixel-level loss, computed over the errors of all pixels in the image, and a boundary-level loss, computed over the errors of the pixels on the border of the spill. CBENet is effective for three reasons: first, it creates a multiscale representation of oil spill features; second, the architecture enhances the fusion of contextual features; and third, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;br /&gt;
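A minimal sketch of a loss combining the two terms described above, assuming binary cross-entropy for the pixel-level term, a morphological-gradient border for the boundary term, and an equal weighting `lam` — the paper's exact formulation of Lbe is not reproduced here.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Pixel-level loss: mean binary cross-entropy over all pixels."""
    p = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

def boundary_mask(target):
    """Mark pixels whose 4-neighbourhood contains a differing value,
    i.e. pixels sitting on the spill/background border."""
    pad = np.pad(target, 1, mode='edge')
    nb = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                   pad[1:-1, :-2], pad[1:-1, 2:]])
    return (nb != target).any(axis=0)

def boundary_enhanced_loss(pred, target, lam=1.0):
    """Lbe = pixel-level loss + lam * loss restricted to border pixels,
    so errors on the blurry spill boundary are penalised twice."""
    b = boundary_mask(target)
    l_pix = bce(pred, target)
    l_bnd = bce(pred[b], target[b]) if b.any() else 0.0
    return l_pix + lam * l_bnd

truth = np.zeros((8, 8)); truth[2:6, 2:6] = 1.0   # a toy 4x4 "spill"
pred = truth * 0.9 + 0.05                          # soft prediction
loss = boundary_enhanced_loss(pred, truth)
```

Because the boundary term re-weights only border pixels, a model that gets the spill interior right but smears the edges is penalised more than under the pixel-level loss alone.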
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Results''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first experiment assessed the oil spill detection capabilities of CBENet using a microwave remote sensing dataset of 1274 SAR images. The training set contained 1019 images and the test set contained 255 images used to check the accuracy of the trained model. The results were compared against four other typical deep learning detection methods using four metrics: precision, recall, F1-score, and intersection over union (IoU). These computations are shown in Figure 2. &amp;lt;br/&amp;gt;&lt;br /&gt;
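The four metrics are standard for binary segmentation and can be computed directly from the predicted and ground-truth masks; the toy masks below are illustrative, not data from the paper.

```python
import numpy as np

def detection_metrics(pred, truth):
    """Precision, recall, F1 and IoU from binary masks
    (1 = oil spill pixel, 0 = background)."""
    tp = int(((pred == 1) & (truth == 1)).sum())   # correctly detected spill
    fp = int(((pred == 1) & (truth == 0)).sum())   # false alarms
    fn = int(((pred == 0) & (truth == 1)).sum())   # missed spill pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return {"precision": precision, "recall": recall, "f1": f1, "iou": iou}

truth = np.zeros((8, 8), int); truth[2:6, 2:6] = 1   # 16 true spill pixels
pred = np.zeros((8, 8), int); pred[2:6, 2:7] = 1     # 20 predicted (4 false alarms)
m = detection_metrics(pred, truth)
# precision 0.8, recall 1.0, IoU 0.8
```

This also shows why a model can lead on recall (as MCAN does in Table 1) while trailing on precision and IoU: over-predicting spill pixels raises recall but inflates false positives.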
&lt;br /&gt;
Figure 3 shows the observation images of four oil spills and their ground-truth detection results; in other words, the bottom row shows what an effective detection method should produce. Figure 4 shows the detection results of the five models. Based on these results, CBENet clearly performs exceptionally well compared with the other models. For example, image 3c shows an oil spill with a thin, intricate portion to the left of the main body, and only CBENet successfully detected this area. The areas within the red circles show other details CBENet captured that the other models did not. Quantitative evaluations of the five models were also conducted using the metrics in Figure 2; this data is shown in Table 1. CBENet outperformed all the other models on every metric (precision, F1-score, and IoU) except recall, where MCAN scored slightly better. Additionally, CBENet trained with the boundary-enhanced loss function (Lbe) performed better than CBENet trained only with the pixel-level loss (Lce).&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:26:50Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di , Xinnan Di , Huiyao Xiao , Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills can be a major issue in marine environments, which is why it is important to be able to detect them early. Detection typically entails the use of microwave remote sensing images, specifically synthetic aperture radar (SAR). This technique uses radar pulses from aircraft and can identify oil spills with high-resolution imagery, nonstop, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers listed various models that were tested in previous studies, as well as the issues they overcame. Some of the technologies they mentioned are deep convolutional neural networks (DCNNs), U-shaped networks, multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for detecting oil spills. This paper created a contextual and boundary-enhanced network (CBENet) to analyze the SAR images. The goal of this technique is to address certain issues regarding the quality of the SAR images. For example, the images don’t identify oil spills well at varied scales. Additionally, boundaries of the spills are often blurry, affecting the accuracy of detecting them. The paper finds that CBENet is an effective technique and was validated via various qualitative and quantitative evaluations&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt; CBENet was developed as an encoder-decoder architecture. What this means is that an encoder takes apart spatial features from the oil spill image while simultaneously learning and understanding them. The decoder has a similar architecture to the encoder and takes the fused contextual features and produces a detection result as an oil spill image. The general idea of how this architecture works is shown in Figure 1. Between the encoder and decoder is the contextual fusion model which is a vital connection. Essentially this takes the inputs from the encoder and is able to provide “context” to the features extracted. This doesn’t only look at individual pixels but rather are larger areas simultaneously. The fused contextual features are then given to the decoder. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt; The following section highlights the formulations used to train CBENet. “Lbe” was defined as the boundary-enchanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two parameters: pixel-level loss values, which is the error of all pixels in the image, and boundary-level loss values, which computes errors of the pixels on the border of the spill. CBENet is effective for three reasons, the first being that it can create a multiscale representation of oil spill features. The second is that the architecture enhances the fusion of contextual features. Lastly, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:26:38Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di , Xinnan Di , Huiyao Xiao , Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills can be a major issue in marine environments, which is why it is important to be able to detect them early. Detection typically entails the use of microwave remote sensing images, specifically synthetic aperture radar (SAR). This technique uses radar pulses from aircraft and can identify oil spills with high-resolution imagery, nonstop, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers listed various models that were tested in previous studies, as well as the issues they overcame. Some of the technologies they mentioned are deep convolutional neural networks (DCNNs), U-shaped networks, multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for detecting oil spills. This paper created a contextual and boundary-enhanced network (CBENet) to analyze the SAR images. The goal of this technique is to address certain issues regarding the quality of the SAR images. For example, the images don’t identify oil spills well at varied scales. Additionally, boundaries of the spills are often blurry, affecting the accuracy of detecting them. The paper finds that CBENet is an effective technique and was validated via various qualitative and quantitative evaluations&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;CBENet was developed as an encoder-decoder architecture. What this means is that an encoder takes apart spatial features from the oil spill image while simultaneously learning and understanding them. The decoder has a similar architecture to the encoder and takes the fused contextual features and produces a detection result as an oil spill image. The general idea of how this architecture works is shown in Figure 1. Between the encoder and decoder is the contextual fusion model which is a vital connection. Essentially this takes the inputs from the encoder and is able to provide “context” to the features extracted. This doesn’t only look at individual pixels but rather are larger areas simultaneously. The fused contextual features are then given to the decoder. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;The following section highlights the formulations used to train CBENet. “Lbe” was defined as the boundary-enchanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two parameters: pixel-level loss values, which is the error of all pixels in the image, and boundary-level loss values, which computes errors of the pixels on the border of the spill. CBENet is effective for three reasons, the first being that it can create a multiscale representation of oil spill features. The second is that the architecture enhances the fusion of contextual features. Lastly, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:26:00Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di , Xinnan Di , Huiyao Xiao , Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills can be a major issue in marine environments, which is why it is important to be able to detect them early. Detection typically entails the use of microwave remote sensing images, specifically synthetic aperture radar (SAR). This technique uses radar pulses from aircraft and can identify oil spills with high-resolution imagery, nonstop, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers listed various models that were tested in previous studies, as well as the issues they overcame. Some of the technologies they mentioned are deep convolutional neural networks (DCNNs), U-shaped networks, multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for detecting oil spills. This paper created a contextual and boundary-enhanced network (CBENet) to analyze the SAR images. The goal of this technique is to address certain issues regarding the quality of the SAR images. For example, the images don’t identify oil spills well at varied scales. Additionally, boundaries of the spills are often blurry, affecting the accuracy of detecting them. The paper finds that CBENet is an effective technique and was validated via various qualitative and quantitative evaluations&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;CBENet was developed as an encoder-decoder architecture. What this means is that an encoder takes apart spatial features from the oil spill image while simultaneously learning and understanding them. The decoder has a similar architecture to the encoder and takes the fused contextual features and produces a detection result as an oil spill image. The general idea of how this architecture works is shown in Figure 1. Between the encoder and decoder is the contextual fusion model which is a vital connection. Essentially this takes the inputs from the encoder and is able to provide “context” to the features extracted. This doesn’t only look at individual pixels but rather are larger areas simultaneously. The fused contextual features are then given to the decoder. &amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;The following section highlights the formulations used to train CBENet. “Lbe” was defined as the boundary-enchanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two parameters: pixel-level loss values, which is the error of all pixels in the image, and boundary-level loss values, which computes errors of the pixels on the border of the spill. CBENet is effective for three reasons, the first being that it can create a multiscale representation of oil spill features. The second is that the architecture enhances the fusion of contextual features. Lastly, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:25:48Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di , Xinnan Di , Huiyao Xiao , Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills can be a major issue in marine environments, which is why it is important to be able to detect them early. Detection typically entails the use of microwave remote sensing images, specifically synthetic aperture radar (SAR). This technique uses radar pulses from aircraft and can identify oil spills with high-resolution imagery, nonstop, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers listed various models that were tested in previous studies, as well as the issues they overcame. Some of the technologies they mentioned are deep convolutional neural networks (DCNNs), U-shaped networks, multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for detecting oil spills. This paper created a contextual and boundary-enhanced network (CBENet) to analyze the SAR images. The goal of this technique is to address certain issues regarding the quality of the SAR images. For example, the images don’t identify oil spills well at varied scales. Additionally, boundaries of the spills are often blurry, affecting the accuracy of detecting them. The paper finds that CBENet is an effective technique and was validated via various qualitative and quantitative evaluations. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt;CBENet was developed as an encoder-decoder architecture. What this means is that an encoder takes apart spatial features from the oil spill image while simultaneously learning and understanding them. The decoder has a similar architecture to the encoder and takes the fused contextual features and produces a detection result as an oil spill image. The general idea of how this architecture works is shown in Figure 1. Between the encoder and decoder is the contextual fusion model which is a vital connection. Essentially this takes the inputs from the encoder and is able to provide “context” to the features extracted. This doesn’t only look at individual pixels but rather are larger areas simultaneously. The fused contextual features are then given to the decoder. &amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;The following section highlights the formulations used to train CBENet. “Lbe” was defined as the boundary-enchanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two parameters: pixel-level loss values, which is the error of all pixels in the image, and boundary-level loss values, which computes errors of the pixels on the border of the spill. CBENet is effective for three reasons, the first being that it can create a multiscale representation of oil spill features. The second is that the architecture enhances the fusion of contextual features. Lastly, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:25:31Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di , Xinnan Di , Huiyao Xiao , Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills can be a major issue in marine environments, which is why it is important to be able to detect them early. Detection typically entails the use of microwave remote sensing images, specifically synthetic aperture radar (SAR). This technique uses radar pulses from aircraft and can identify oil spills with high-resolution imagery, nonstop, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers listed various models that were tested in previous studies, as well as the issues they overcame. Some of the technologies they mentioned are deep convolutional neural networks (DCNNs), U-shaped networks, multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for detecting oil spills. This paper created a contextual and boundary-enhanced network (CBENet) to analyze the SAR images. The goal of this technique is to address certain issues regarding the quality of the SAR images. For example, the images don’t identify oil spills well at varied scales. Additionally, boundaries of the spills are often blurry, affecting the accuracy of detecting them. The paper finds that CBENet is an effective technique and was validated via various qualitative and quantitative evaluations. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt; CBENet was developed as an encoder-decoder architecture. What this means is that an encoder takes apart spatial features from the oil spill image while simultaneously learning and understanding them. The decoder has a similar architecture to the encoder and takes the fused contextual features and produces a detection result as an oil spill image. The general idea of how this architecture works is shown in Figure 1. Between the encoder and decoder is the contextual fusion model which is a vital connection. Essentially this takes the inputs from the encoder and is able to provide “context” to the features extracted. This doesn’t only look at individual pixels but rather are larger areas simultaneously. The fused contextual features are then given to the decoder. &amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;The following section highlights the formulations used to train CBENet. “Lbe” was defined as the boundary-enchanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two parameters: pixel-level loss values, which is the error of all pixels in the image, and boundary-level loss values, which computes errors of the pixels on the border of the spill. CBENet is effective for three reasons, the first being that it can create a multiscale representation of oil spill features. The second is that the architecture enhances the fusion of contextual features. Lastly, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:25:10Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di , Xinnan Di , Huiyao Xiao , Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills can be a major issue in marine environments, which is why it is important to be able to detect them early. Detection typically entails the use of microwave remote sensing images, specifically synthetic aperture radar (SAR). This technique uses radar pulses from aircraft and can identify oil spills with high-resolution imagery, nonstop, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers listed various models that were tested in previous studies, as well as the issues they overcame. Some of the technologies they mentioned are deep convolutional neural networks (DCNNs), U-shaped networks, multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for detecting oil spills. This paper created a contextual and boundary-enhanced network (CBENet) to analyze the SAR images. The goal of this technique is to address certain issues regarding the quality of the SAR images. For example, the images don’t identify oil spills well at varied scales. Additionally, boundaries of the spills are often blurry, affecting the accuracy of detecting them. The paper finds that CBENet is an effective technique and was validated via various qualitative and quantitative evaluations. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/p&amp;gt; CBENet was developed as an encoder-decoder architecture. What this means is that an encoder takes apart spatial features from the oil spill image while simultaneously learning and understanding them. The decoder has a similar architecture to the encoder and takes the fused contextual features and produces a detection result as an oil spill image. The general idea of how this architecture works is shown in Figure 1. Between the encoder and decoder is the contextual fusion model which is a vital connection. Essentially this takes the inputs from the encoder and is able to provide “context” to the features extracted. This doesn’t only look at individual pixels but rather are larger areas simultaneously. The fused contextual features are then given to the decoder. &lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/p&amp;gt;The following section highlights the formulations used to train CBENet. “Lbe” was defined as the boundary-enchanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two parameters: pixel-level loss values, which is the error of all pixels in the image, and boundary-level loss values, which computes errors of the pixels on the border of the spill. CBENet is effective for three reasons, the first being that it can create a multiscale representation of oil spill features. The second is that the architecture enhances the fusion of contextual features. Lastly, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:24:51Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di , Xinnan Di , Huiyao Xiao , Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills can be a major issue in marine environments, which is why it is important to be able to detect them early. Detection typically entails the use of microwave remote sensing images, specifically synthetic aperture radar (SAR). This technique uses radar pulses from aircraft and can identify oil spills with high-resolution imagery, nonstop, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers listed various models that were tested in previous studies, as well as the issues they overcame. Some of the technologies they mentioned are deep convolutional neural networks (DCNNs), U-shaped networks, multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for detecting oil spills. This paper created a contextual and boundary-enhanced network (CBENet) to analyze the SAR images. The goal of this technique is to address certain issues regarding the quality of the SAR images. For example, the images don’t identify oil spills well at varied scales. Additionally, boundaries of the spills are often blurry, affecting the accuracy of detecting them. The paper finds that CBENet is an effective technique and was validated via various qualitative and quantitative evaluations. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;CBENet was developed as an encoder-decoder architecture: the encoder extracts spatial features from the oil spill image and learns representations of them, while the decoder, which mirrors the encoder's architecture, takes the fused contextual features and produces the detection result as an oil spill image. The general idea of this architecture is shown in Figure 1. Between the encoder and decoder sits the contextual fusion model, a vital connection: it takes the encoder's outputs and provides “context” to the extracted features, considering not only individual pixels but larger areas simultaneously. The fused contextual features are then passed to the decoder. &lt;br /&gt;
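The encoder-decoder flow with contextual fusion described above can be sketched minimally as follows. This is a hypothetical NumPy illustration using toy average-pooling, global-mean fusion, and nearest-neighbor upsampling; it is not the authors' implementation, only the shape of the idea.

```python
import numpy as np

def encode(img, levels=2):
    """Toy encoder: repeatedly 2x2 average-pool to extract coarser features."""
    feats = [img]
    for _ in range(levels):
        h, w = feats[-1].shape
        f = feats[-1][:h - h % 2, :w - w % 2]
        f = f.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        feats.append(f)
    return feats

def fuse_context(feat):
    """Toy contextual fusion: blend each position with the global mean, so
    every feature carries scene-wide context, not just its local pixel."""
    return 0.5 * feat + 0.5 * feat.mean()

def decode(fused, out_shape):
    """Toy decoder: upsample fused features back to input size and threshold
    into a binary oil-spill detection mask."""
    reps = (out_shape[0] // fused.shape[0] + 1, out_shape[1] // fused.shape[1] + 1)
    up = np.kron(fused, np.ones(reps))[:out_shape[0], :out_shape[1]]
    return (up > up.mean()).astype(np.uint8)

img = np.random.rand(8, 8)  # stand-in for a SAR intensity image
feats = encode(img)
mask = decode(fuse_context(feats[-1]), img.shape)
```

The mask has the same shape as the input image, mirroring how the decoder produces a detection result at the resolution of the original SAR scene.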
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;The following section highlights the formulations used to train CBENet. “Lbe” is defined as the boundary-enhanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two terms: a pixel-level loss, computing the error over all pixels in the image, and a boundary-level loss, computing the error over the pixels on the border of the spill. CBENet is effective for three reasons: first, it creates a multiscale representation of oil spill features; second, the architecture enhances the fusion of contextual features; and third, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:24:43Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di, Xinnan Di, Huiyao Xiao, Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills are a major threat to marine environments, which is why early detection is important. Detection typically relies on microwave remote sensing imagery, specifically synthetic aperture radar (SAR). SAR transmits radar pulses from aircraft, provides high-resolution imagery, operates continuously, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers list various models tested in previous studies, along with the issues each overcame, including deep convolutional neural networks (DCNNs), U-shaped networks, the multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for oil spill detection. This paper introduces a contextual and boundary-enhanced network (CBENet) to analyze SAR images. The technique addresses certain shortcomings of SAR imagery: oil spills are not identified well at varied scales, and spill boundaries are often blurry, reducing detection accuracy. The paper finds that CBENet is an effective technique, validating it through various qualitative and quantitative evaluations. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;CBENet was developed as an encoder-decoder architecture: the encoder extracts spatial features from the oil spill image and learns representations of them, while the decoder, which mirrors the encoder's architecture, takes the fused contextual features and produces the detection result as an oil spill image. The general idea of this architecture is shown in Figure 1. Between the encoder and decoder sits the contextual fusion model, a vital connection: it takes the encoder's outputs and provides “context” to the extracted features, considering not only individual pixels but larger areas simultaneously. The fused contextual features are then passed to the decoder. &lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;The following section highlights the formulations used to train CBENet. “Lbe” is defined as the boundary-enhanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two terms: a pixel-level loss, computing the error over all pixels in the image, and a boundary-level loss, computing the error over the pixels on the border of the spill. CBENet is effective for three reasons: first, it creates a multiscale representation of oil spill features; second, the architecture enhances the fusion of contextual features; and third, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:23:58Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di, Xinnan Di, Huiyao Xiao, Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills are a major threat to marine environments, which is why early detection is important. Detection typically relies on microwave remote sensing imagery, specifically synthetic aperture radar (SAR). SAR transmits radar pulses from aircraft, provides high-resolution imagery, operates continuously, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers list various models tested in previous studies, along with the issues each overcame, including deep convolutional neural networks (DCNNs), U-shaped networks, the multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for oil spill detection. This paper introduces a contextual and boundary-enhanced network (CBENet) to analyze SAR images. The technique addresses certain shortcomings of SAR imagery: oil spills are not identified well at varied scales, and spill boundaries are often blurry, reducing detection accuracy. The paper finds that CBENet is an effective technique, validating it through various qualitative and quantitative evaluations. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;CBENet was developed as an encoder-decoder architecture: the encoder extracts spatial features from the oil spill image and learns representations of them, while the decoder, which mirrors the encoder's architecture, takes the fused contextual features and produces the detection result as an oil spill image. The general idea of this architecture is shown in Figure 1. Between the encoder and decoder sits the contextual fusion model, a vital connection: it takes the encoder's outputs and provides “context” to the extracted features, considering not only individual pixels but larger areas simultaneously. The fused contextual features are then passed to the decoder. &lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;The following section highlights the formulations used to train CBENet. “Lbe” is defined as the boundary-enhanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two terms: a pixel-level loss, computing the error over all pixels in the image, and a boundary-level loss, computing the error over the pixels on the border of the spill. CBENet is effective for three reasons: first, it creates a multiscale representation of oil spill features; second, the architecture enhances the fusion of contextual features; and third, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:23:39Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di, Xinnan Di, Huiyao Xiao, Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills are a major threat to marine environments, which is why early detection is important. Detection typically relies on microwave remote sensing imagery, specifically synthetic aperture radar (SAR). SAR transmits radar pulses from aircraft, provides high-resolution imagery, operates continuously, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers list various models tested in previous studies, along with the issues each overcame, including deep convolutional neural networks (DCNNs), U-shaped networks, the multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for oil spill detection. This paper introduces a contextual and boundary-enhanced network (CBENet) to analyze SAR images. The technique addresses certain shortcomings of SAR imagery: oil spills are not identified well at varied scales, and spill boundaries are often blurry, reducing detection accuracy. The paper finds that CBENet is an effective technique, validating it through various qualitative and quantitative evaluations. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;CBENet was developed as an encoder-decoder architecture: the encoder extracts spatial features from the oil spill image and learns representations of them, while the decoder, which mirrors the encoder's architecture, takes the fused contextual features and produces the detection result as an oil spill image. The general idea of this architecture is shown in Figure 1. Between the encoder and decoder sits the contextual fusion model, a vital connection: it takes the encoder's outputs and provides “context” to the extracted features, considering not only individual pixels but larger areas simultaneously. The fused contextual features are then passed to the decoder. &amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;The following section highlights the formulations used to train CBENet. “Lbe” is defined as the boundary-enhanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two terms: a pixel-level loss, computing the error over all pixels in the image, and a boundary-level loss, computing the error over the pixels on the border of the spill. CBENet is effective for three reasons: first, it creates a multiscale representation of oil spill features; second, the architecture enhances the fusion of contextual features; and third, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:23:18Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di, Xinnan Di, Huiyao Xiao, Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills are a major threat to marine environments, which is why early detection is important. Detection typically relies on microwave remote sensing imagery, specifically synthetic aperture radar (SAR). SAR transmits radar pulses from aircraft, provides high-resolution imagery, operates continuously, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers list various models tested in previous studies, along with the issues each overcame, including deep convolutional neural networks (DCNNs), U-shaped networks, the multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for oil spill detection. This paper introduces a contextual and boundary-enhanced network (CBENet) to analyze SAR images. The technique addresses certain shortcomings of SAR imagery: oil spills are not identified well at varied scales, and spill boundaries are often blurry, reducing detection accuracy. The paper finds that CBENet is an effective technique, validating it through various qualitative and quantitative evaluations. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;CBENet was developed as an encoder-decoder architecture: the encoder extracts spatial features from the oil spill image and learns representations of them, while the decoder, which mirrors the encoder's architecture, takes the fused contextual features and produces the detection result as an oil spill image. The general idea of this architecture is shown in Figure 1. Between the encoder and decoder sits the contextual fusion model, a vital connection: it takes the encoder's outputs and provides “context” to the extracted features, considering not only individual pixels but larger areas simultaneously. The fused contextual features are then passed to the decoder. &amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;The following section highlights the formulations used to train CBENet. “Lbe” is defined as the boundary-enhanced loss function, which ensures higher accuracy when detecting the boundaries of an oil spill. It combines two terms: a pixel-level loss, computing the error over all pixels in the image, and a boundary-level loss, computing the error over the pixels on the border of the spill. CBENet is effective for three reasons: first, it creates a multiscale representation of oil spill features; second, the architecture enhances the fusion of contextual features; and third, detection becomes more accurate when the boundary-enhanced loss function is included.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:22:29Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di, Xinnan Di, Huiyao Xiao, Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;Oil spills are a major threat to marine environments, which is why early detection is important. Detection typically relies on microwave remote sensing imagery, specifically synthetic aperture radar (SAR). SAR transmits radar pulses from aircraft, provides high-resolution imagery, operates continuously, and is resistant to weather conditions such as precipitation and fog. SAR is often paired with deep learning models to increase its effectiveness. The researchers list various models tested in previous studies, along with the issues each overcame, including deep convolutional neural networks (DCNNs), U-shaped networks, the multiscale conditional adversarial network (MCAN), generative adversarial networks (GANs), and many other models and methodologies for oil spill detection. This paper introduces a contextual and boundary-enhanced network (CBENet) to analyze SAR images. The technique addresses certain shortcomings of SAR imagery: oil spills are not identified well at varied scales, and spill boundaries are often blurry, reducing detection accuracy. The paper finds that CBENet is an effective technique, validating it through various qualitative and quantitative evaluations.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing</id>
		<title>CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/CBENet:_contextual_and_boundary-enhanced_network_for_oil_spill_detection_via_microwave_remote_sensing"/>
				<updated>2026-01-13T12:21:40Z</updated>
		
<summary type="html">&lt;p&gt;Elenikaroutsos: New page with '''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt; '''Authors...'&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;CBENet: contextual and boundary-enhanced network for oil spill detection via microwave remote sensing&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mengmeng Di, Xinnan Di, Huiyao Xiao, Ying Gao and Yongqing Li''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1007/s44295-025-00056-5 &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 27 February 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/Exploring_crop_health_and_its_associations_with_fungal_soil_microbiome_composition_using_machine_learning_applied_to_remote_sensing_data</id>
		<title>Exploring crop health and its associations with fungal soil microbiome composition using machine learning applied to remote sensing data</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/Exploring_crop_health_and_its_associations_with_fungal_soil_microbiome_composition_using_machine_learning_applied_to_remote_sensing_data"/>
				<updated>2026-01-13T12:18:00Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;Exploring crop health and its associations with fungal soil microbiome composition using machine learning applied to remote sensing data&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mathies Brinks Sørensen, David Faurdal, Giovanni Schiesaro, Emil Damgaard Jensen, Michael Krogh Jensen, Line Katrine Harder Clemmensen''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1038/s43247-025-02330-0  &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 7 May 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: MODEL.png | thumb | '''Fig 1.''' Model comparisons for the linear regression model (Linear) and random forest model (RF) using the root mean squared error (RMSE) for the test set (20%) and the full dataset (including the test set). Note that the R2 indicates the explained variance for the full dataset including the test set.]]&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: BOXPLOTS.png | thumb | '''Fig 2.''' Boxplots of the residual NDVI values from each RF model of Table 2 within clusters 1 and 2 originating from the unsupervised clustering of the biodiversity samples.]]&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: GENERA.png | thumb | '''Fig 3.''' (A) Visualization of the 30 most abundant taxonomies per crop for the first cluster. (B) Visualization of the 30 most abundant taxonomies per crop for the second cluster.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;The goal of this study was to examine how remote sensing combined with machine learning can assist in understanding crop health. Food insecurity is one of the fastest-growing concerns around the globe as populations continue to rise, which makes it important to apply sustainable agricultural practices that increase crop yields and productivity. &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Smart farming methods integrate remote sensing technologies such as satellite imagery and/or drones with data analytics tools to monitor fields. The satellites most commonly used in such applications are Sentinel-2 and Landsat 8, which offer medium resolution but revisit a site only every few days. The MODIS satellite is another option, since it provides daily samples; however, its resolution is low, yielding less accurate analyses. Multispectral images from these satellites are used to calculate vegetation indices that characterize vegetation. The most common vegetation indices are the normalized difference vegetation index (NDVI), the enhanced vegetation index (EVI), and the leaf area index (LAI). &amp;lt;br/&amp;gt;&lt;br /&gt;
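NDVI itself is a simple band ratio computed from the red and near-infrared reflectance. A minimal sketch follows; the formula is standard, but the sample reflectance values are made up for illustration.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from NIR and red reflectance.
    eps guards against division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Healthy vegetation reflects strongly in NIR and absorbs red light,
# pushing NDVI toward +1; bare soil or water gives values near or below 0.
nir = np.array([0.50, 0.30, 0.10])
red = np.array([0.05, 0.10, 0.10])
print(ndvi(nir, red))  # values near 0.82, 0.5 and 0.0
```

The same per-pixel formula applied to a whole multispectral image yields the NDVI maps the study works with.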
&lt;br /&gt;
Of all the vegetation indices, NDVI is the most popular because of the large number of factors that can affect it, including soil moisture, climate, nutrients, crop type, and temperature. The paper reviews the literature on how different microbial species may affect NDVI. Prior research has established a link between certain bacteria and NDVI values, but research linking particular fungi to NDVI values remains limited. The work that does exist has shown that soil decomposers play a role in higher NDVI values and that fungal richness is connected to environmental factors such as pH and landscape type. This lack of research on fungal biomes motivated the present study to examine more closely the connection between fungal soil composition and NDVI values. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods and Data''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first step of the methodology was to download the satellite images and compute the NDVI. Low NDVI indicates poor crop health, while high NDVI indicates good crop health. The NDVI values were then adjusted for abiotic (non-living) factors by removing their influence with a random forest model. Finally, the adjusted NDVI values were analyzed against the different fungal soil microbiomes. &lt;br /&gt;
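The adjustment step above can be sketched as fitting NDVI against the abiotic covariates and keeping the residuals. The sketch below uses ordinary least squares on synthetic data as a simple stand-in for the paper's random forest model; all variable names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical abiotic covariates per observation (e.g. soil temperature,
# soil moisture) plus a biotic signal we want to isolate; all values invented.
n = 200
abiotic = rng.normal(size=(n, 2))
biotic_effect = rng.normal(scale=0.05, size=n)
ndvi_obs = 0.4 + 0.1 * abiotic[:, 0] - 0.05 * abiotic[:, 1] + biotic_effect

# Fit NDVI ~ abiotic factors (OLS as a stand-in for the random forest),
# then keep the residuals as the "abiotically adjusted" NDVI.
X = np.column_stack([np.ones(n), abiotic])
coef, *_ = np.linalg.lstsq(X, ndvi_obs, rcond=None)
residual_ndvi = ndvi_obs - X @ coef

# The residuals track the biotic signal, not the abiotic drivers.
print(round(float(np.corrcoef(residual_ndvi, biotic_effect)[0, 1]), 2))  # close to 1
```

This is why the filtering described next matters: if the abiotic covariates are poorly measured or confounded, the residuals no longer isolate the biotic contribution.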
The abiotic data used to adjust the NDVI came from various sources. Topsoil composition data were obtained from the LUCAS 2018 topsoil dataset, restricted to the three most prevalent crop types: wheat, barley, and maize. The data were further filtered by removing winter months, when most of the images were covered in snow. Climate data came from the ERA5 Copernicus dataset, providing soil-related properties such as soil temperature, soil moisture, soil type, and air temperature. These variables were then linked to each NDVI observation. This step was vital to ensure that any recorded changes in the NDVI values resulted from fungi rather than other non-living factors; if too many parameters changed simultaneously without adjustment, no reasonable correlation could be established. The last dataset used was the LUCAS biodiversity dataset, which contains a total of 885 biosamples, of which 115 corresponded to wheat, barley, and maize crops during the study period. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Results''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first results of the paper demonstrate which model, linear regression or random forest (RF), predicts the NDVI values better. As shown in Table 1, the RF model outperformed the linear regression model, with higher R^2 values (indicating greater explained variance) and lower RMSE values. Both models used 16 parameters, drawn from the datasets described in the Methods section. &amp;lt;br/&amp;gt;&lt;br /&gt;
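The two comparison metrics are standard and easy to compute directly; a minimal sketch with made-up predictions:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error: lower means a better fit."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination: fraction of variance explained."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1 - ss_res / ss_tot)

# Invented NDVI observations and model predictions, each off by 0.05.
y_true = np.array([0.2, 0.4, 0.6, 0.8])
y_pred = np.array([0.25, 0.35, 0.65, 0.75])
print(round(rmse(y_true, y_pred), 4))       # → 0.05
print(round(r_squared(y_true, y_pred), 4))  # → 0.95
```

Comparing the linear and RF models on the same held-out test set with these two functions mirrors the comparison reported in Table 1 and Figure 1.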
&lt;br /&gt;
Since the RF models outperformed the linear regression model, they were used for the rest of the fungal soil modeling. Two major clusters of fungal genera were identified through unsupervised clustering based on statistical groupings of their properties. The boxplots in Figure 2 show that the residual NDVI values were higher for cluster 1 than for cluster 2, indicating better plant health. &lt;br /&gt;
&lt;br /&gt;
To gain a better understanding of which fungi may be contributing to the NDVI values, the frequencies of the fungal genera are visualized in Fig. 3. The researchers explain that fungi such as Tomentella and Mortierella, which are typically associated with plant health, are more abundant in (wheat) cluster 1. Based on my own observation, however, both clusters have very similar frequency percentages of Mortierella, so I fail to see how it indicates plant health. Fusarium, on the other hand, was very prevalent in cluster 2 while not appearing at all among the top 30 genera of cluster 1. This may indicate that Fusarium is a pathogenic genus, contributing to poorer plant health and, consequently, lower NDVI values. A bootstrap resampling approach identified 99 and 114 genera in clusters 1 and 2, respectively, as non-outlier data. &amp;lt;br/&amp;gt;&lt;br /&gt;
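A bootstrap resampling check of this kind can be sketched as follows. The abundance values are invented for illustration and this is not the authors' exact outlier procedure, only the general resampling idea: resample the biosamples with replacement to get a sampling distribution for a genus-level statistic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical relative abundances of one genus across biosamples in a cluster.
abundance = np.array([0.12, 0.10, 0.14, 0.11, 0.13, 0.09, 0.12, 0.15])

# Bootstrap: resample the biosamples with replacement many times and record
# the mean abundance of each resample, giving an empirical sampling distribution.
boot_means = np.array([
    rng.choice(abundance, size=abundance.size, replace=True).mean()
    for _ in range(2000)
])

# A genus whose statistic falls inside the central 95% of its bootstrap
# distribution would be treated as non-outlier data.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(lo <= abundance.mean() <= hi)  # → True
```

Repeating this per genus and per cluster, and discarding genera that behave as outliers, is consistent with the 99 and 114 non-outlier genera counts reported for clusters 1 and 2.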
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;'''Discussion''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	For the discussion section, the researchers compared their findings to past literature and explained the limitations of their models. Several results agreed with the literature: NDVI is affected by the seasons, peaking around harvest months; soil carbon content has a non-linear correlation with NDVI; and meteorological variables such as temperature and soil moisture have the highest importance for NDVI prediction. The researchers also note that their abiotic model for removing non-living influence from the NDVI response confirms past results. &lt;br /&gt;
	There were several limitations to the study, which the researchers addressed. First, their meteorological data are confined to certain ranges, such as temperatures between 280 K and 290 K, which means their abiotic model cannot handle extreme weather well. Additionally, biological data were obtained only from 2018, so changes over time could not be observed. Future studies could benefit from larger geospatial coverage and longer time periods. Higher-resolution satellites may also provide noteworthy results, since they may map biotic (living) effects better. The researchers further suggest testing fungal interactions both in vivo and in vitro; for example, there have been no known field experiments on how different levels of Mortierella affect the NDVI, and they outline how such experiments could be carried out. Another candidate for future research is species-level taxonomy for NDVI data, since this paper used only the broader genus-level taxonomy.&lt;br /&gt;
&lt;br /&gt;
[[category:Οικολογία]]&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:GENERA.png</id>
		<title>Αρχείο:GENERA.png</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:GENERA.png"/>
				<updated>2026-01-13T12:15:48Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/Exploring_crop_health_and_its_associations_with_fungal_soil_microbiome_composition_using_machine_learning_applied_to_remote_sensing_data</id>
		<title>Exploring crop health and its associations with fungal soil microbiome composition using machine learning applied to remote sensing data</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/Exploring_crop_health_and_its_associations_with_fungal_soil_microbiome_composition_using_machine_learning_applied_to_remote_sensing_data"/>
				<updated>2026-01-13T12:15:38Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;Exploring crop health and its associations with fungal soil microbiome composition using machine learning applied to remote sensing data&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mathies Brinks Sørensen, David Faurdal, Giovanni Schiesaro, Emil Damgaard Jensen, Michael Krogh Jensen, Line Katrine Harder Clemmensen&lt;br /&gt;
''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1038/s43247-025-02330-0  &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 7 May 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: MODEL.png | thumb | '''Fig 1.''' Model comparisons for the linear regression model (Linear) and random forest model (RF) using the root mean squared error (RMSE) for the test set (20%) and the full dataset (including the test set). Note that the R2 indicates the explained variance for the full dataset including the test set.]]&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: BOXPLOTS.png | thumb | '''Fig 2.''' Boxplots of the residual NDVI values from each RF model of Table 2 within clusters 1 and 2 originating from the unsupervised clustering of the biodiversity samples.]]&lt;br /&gt;
&lt;br /&gt;
[[Αρχείο: GENERA.png | thumb | '''Fig 3.''' A Visualization of the 30 most abundant taxonomies per crop for the first cluster. B Visualization of the 30 most abundant taxonomies per crop for the second cluster.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;The goal of this study was to examine how remote sensing combined with machine learning can assist in understanding crop health. Food insecurity has become one of the fastest-growing global concerns as populations continue to rise, making it important to apply sustainable agricultural practices that increase crop yields and productivity. &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Smart farming methods integrate remote sensing technologies such as satellite imagery and drones with data analytics tools to monitor fields. Common satellites in such applications are Sentinel-2 and Landsat 8, which have medium resolution but revisit a site only every several days. The MODIS satellite is another option since it provides daily samples; however, its resolution is low, leading to less accurate analyses. Multispectral images from these satellites are used to calculate vegetation indices, which visualize vegetation characteristics. The most common vegetation indices are the normalized difference vegetation index (NDVI), the enhanced vegetation index (EVI), and the leaf area index (LAI). &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Of all the vegetation indices, NDVI is the most popular because of the large number of factors that can affect it, including soil moisture, climate, nutrients, crop type, and temperature. The paper reviews the literature on how different microbial species may affect the NDVI. Prior research has established a link between certain bacteria and NDVI values, but research linking particular fungi to NDVI values remains limited. The research that does exist has shown that soil decomposers play a role in higher NDVI values and that fungal richness is connected to environmental factors such as pH and landscape type. This lack of research on fungal biomes motivated the study to examine the connection between fungal soil composition and NDVI values more closely. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods and Data''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first step of the methodology was to download the satellite images and compute the NDVI. Low NDVI indicates poor crop health, while high NDVI indicates good crop health. The NDVI values were then adjusted for abiotic (non-living) factors by removing their influence through a random forest model. Finally, the NDVI values were analyzed with respect to different fungal soil microbiomes. &lt;br /&gt;
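The NDVI computed in this first step is a simple per-pixel ratio of the near-infrared (NIR) and red reflectance bands. A minimal sketch, with illustrative reflectance values that are not taken from the paper:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red).

    Ranges from -1 to 1; healthy vegetation reflects strongly in NIR
    and absorbs red light, so it yields values closer to 1.
    """
    denom = nir + red
    if denom == 0:
        return 0.0  # guard against division by zero (e.g. dark water pixels)
    return (nir - red) / denom

# Illustrative reflectance values (hypothetical, not from the study)
healthy = ndvi(nir=0.50, red=0.08)   # high NDVI -> good crop health
stressed = ndvi(nir=0.30, red=0.20)  # lower NDVI -> poorer crop health
print(round(healthy, 3), round(stressed, 3))
```

In practice this is applied band-wise over whole Sentinel-2 or Landsat 8 scenes rather than single values.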
The abiotic data used to adjust the NDVI came from several sources. Topsoil composition data were obtained from the LUCAS 2018 topsoil dataset, restricted to the three most prevalent crop types: wheat, barley, and maize. The data were further filtered by removing observations from winter months, since most of those images were covered in snow. Climate data came from the Copernicus ERA5 dataset, which provided soil-related variables such as soil temperature, soil moisture, soil type, and air temperature. These variables were then linked to each NDVI observation. This step was vital to ensure that any recorded changes in the NDVI values resulted from fungi rather than from non-living factors; without such adjustment, too many parameters would vary simultaneously and no meaningful correlation could be isolated. The last dataset used was the LUCAS biodiversity dataset, which contains a total of 885 biosamples, of which 115 correspond to wheat, barley, and maize crops during the study period. &amp;lt;br/&amp;gt;&lt;br /&gt;
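The abiotic adjustment described above can be sketched as follows: fit a random forest on the abiotic covariates alone, then treat the residual (observed minus predicted NDVI) as the portion of the signal left for biotic factors to explain. This is only a schematic on synthetic data, not the authors' pipeline; the covariate names are illustrative stand-ins for the ERA5/LUCAS fields:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 300

# Synthetic abiotic covariates (illustrative, not the paper's data):
X = np.column_stack([
    rng.uniform(280, 290, n),   # soil temperature (K)
    rng.uniform(0.1, 0.4, n),   # soil moisture
    rng.uniform(275, 295, n),   # air temperature (K)
])

# Synthetic NDVI: an abiotic signal plus a hidden "biotic" component
biotic = rng.normal(0.0, 0.05, n)
y = 0.02 * (X[:, 0] - 280) + 0.8 * X[:, 1] + biotic

# Fit the abiotic model; the residual is the biotic-attributable NDVI signal
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
residual_ndvi = y - model.predict(X)
print("mean residual:", round(float(residual_ndvi.mean()), 4))
```

The residuals, rather than raw NDVI, are what get compared against the fungal biodiversity samples.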
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Results''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first results of this paper demonstrated which model, linear regression or random forest (RF), predicted the NDVI values better. As shown in Table 1, the RF model outperformed the linear regression model, with higher R² values (indicating greater explained variance) and lower RMSE values. Both models used 16 parameters, obtained from the datasets described in the Methods section. &amp;lt;br/&amp;gt;&lt;br /&gt;
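The two comparison metrics can be stated precisely; a small pure-Python illustration with made-up predictions (not the paper's numbers):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error: lower is better."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: fraction of variance explained (1 is perfect)."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Illustrative NDVI observations and two hypothetical models' predictions
y     = [0.30, 0.45, 0.60, 0.75]
good  = [0.32, 0.44, 0.58, 0.74]  # a better fit: lower RMSE, higher R²
rough = [0.40, 0.40, 0.50, 0.60]  # a weaker fit

assert rmse(y, good) < rmse(y, rough)
assert r2(y, good) > r2(y, rough)
```

A model with higher R² and lower RMSE, as the RF model showed here, explains more of the NDVI variance with smaller typical errors.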
&lt;br /&gt;
Since the RF models outperformed the linear regression model, they were used for the rest of the fungal soil modeling. Two major clusters of fungal genera were identified through unsupervised clustering based on statistical groupings of their properties. The boxplots in Figure 2 show that the residual NDVI values were higher for cluster 1 than for cluster 2, indicating better plant health. &lt;br /&gt;
&lt;br /&gt;
To gain a better understanding of which fungi may be contributing to the NDVI values, the frequency of the fungal genera was visualized in Fig. 3. The researchers explain that fungi such as Tomentella and Mortierella, which are typically associated with plant health, are more abundant in cluster 1 (wheat). (Based on my own reading of the figure, however, both clusters show very similar frequency percentages for Mortierella, so it is unclear how this genus alone indicates plant health.) Fusarium, on the other hand, was very prevalent in cluster 2 while not appearing at all in the top 30 genera of cluster 1. This may indicate that Fusarium is a pathogenic genus, contributing to poorer plant health and, consequently, lower NDVI values. A bootstrap resampling approach identified 99 and 114 genera in clusters 1 and 2, respectively, as non-outlier data. &amp;lt;br/&amp;gt;&lt;br /&gt;
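Bootstrap resampling of the kind used to separate non-outlier genera works by redrawing the observations with replacement many times and examining how stable a statistic is across the redraws. A generic percentile-bootstrap sketch (not the authors' exact procedure; the abundance values are hypothetical):

```python
import random

def bootstrap_ci(values, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of `values`."""
    rng = random.Random(seed)
    # Resample with replacement n_boot times and collect the resample means
    means = sorted(
        sum(rng.choices(values, k=len(values))) / len(values)
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Illustrative relative abundances of one genus across biosamples
abundance = [0.12, 0.09, 0.15, 0.11, 0.10, 0.14, 0.08, 0.13]
lo, hi = bootstrap_ci(abundance)
print(f"95% CI for mean abundance: [{lo:.3f}, {hi:.3f}]")
```

A genus whose bootstrapped statistic falls within such stable bounds would be retained as non-outlier data, while erratic genera would be excluded.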
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;'''Discussion''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	For the discussion section, the researchers compared their findings to past literature and explained the limitations of their models. Several results agreed with the literature: NDVI is affected by the seasons, peaking around harvest months; soil carbon content has a non-linear correlation with NDVI; and meteorological variables such as temperature and soil moisture have the highest importance for NDVI prediction. The researchers also note that their abiotic model for removing non-living influence from the NDVI response confirms past results. &lt;br /&gt;
	There were several limitations to the study, which the researchers addressed. First, their meteorological data are confined to certain ranges, such as temperatures between 280 K and 290 K, which means their abiotic model cannot handle extreme weather well. Additionally, biological data were obtained only from 2018, so changes over time could not be observed. Future studies could benefit from larger geospatial coverage and longer time periods. Higher-resolution satellites may also provide noteworthy results, since they may map biotic (living) effects better. The researchers further suggest testing fungal interactions both in vivo and in vitro; for example, there have been no known field experiments on how different levels of Mortierella affect the NDVI, and they outline how such experiments could be carried out. Another candidate for future research is species-level taxonomy for NDVI data, since this paper used only the broader genus-level taxonomy.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:BOXPLOTS.png</id>
		<title>Αρχείο:BOXPLOTS.png</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:BOXPLOTS.png"/>
				<updated>2026-01-13T12:13:30Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:MODEL.png</id>
		<title>Αρχείο:MODEL.png</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/%CE%91%CF%81%CF%87%CE%B5%CE%AF%CE%BF:MODEL.png"/>
				<updated>2026-01-13T11:57:35Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/Exploring_crop_health_and_its_associations_with_fungal_soil_microbiome_composition_using_machine_learning_applied_to_remote_sensing_data</id>
		<title>Exploring crop health and its associations with fungal soil microbiome composition using machine learning applied to remote sensing data</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/Exploring_crop_health_and_its_associations_with_fungal_soil_microbiome_composition_using_machine_learning_applied_to_remote_sensing_data"/>
				<updated>2026-01-13T11:44:49Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;Exploring crop health and its associations with fungal soil microbiome composition using machine learning applied to remote sensing data&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mathies Brinks Sørensen, David Faurdal, Giovanni Schiesaro, Emil Damgaard Jensen, Michael Krogh Jensen, Line Katrine Harder Clemmensen&lt;br /&gt;
''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1038/s43247-025-02330-0  &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 7 May 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;The goal of this study was to examine how remote sensing combined with machine learning can assist in understanding crop health. One of the fastest-growing concerns around the globe is increasing food insecurity as populations continue to rise. For this reason, it is important to apply sustainable agricultural practices in order to increase crop yields and productivity. &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Smart farming methods integrate remote sensing technologies, such as satellite imagery and drones, with data analytics tools to monitor fields. The common satellites in such applications are Sentinel-2 and Landsat 8, which offer medium resolution but revisit only every few days. The MODIS satellite is another option since it provides daily samples; however, its low resolution yields less accurate analyses. Multispectral images from these satellites are used to calculate vegetation indices that visualize vegetation characteristics. The most common vegetation indices are the normalized difference vegetation index (NDVI), the enhanced vegetation index (EVI), and the leaf area index (LAI). &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Of all the vegetation indices, NDVI is the most popular because of the large number of factors it responds to, including soil moisture, climate, nutrients, crop type, and temperature. The paper reviews the literature on how different microbial species may affect NDVI. Prior research has established a link between certain bacteria and NDVI values, but to date there has been limited research linking particular fungi to NDVI values. The research that does exist shows that soil decomposers contribute to higher NDVI values and that fungal richness is connected to environmental factors such as pH and landscape type. This lack of research on fungal microbiomes motivated the study to examine the connection between fungal soil composition and NDVI values more closely. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods and Data''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first step of the methodology was to download the satellite images and compute the NDVI: low NDVI indicates poor crop health, while high NDVI indicates good crop health. The NDVI values were then adjusted for abiotic (non-living) factors by removing their influence with a random forest model. Finally, the adjusted NDVI values were analyzed against the different fungal soil microbiomes. &lt;br /&gt;
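The NDVI computation mentioned above follows the standard definition from the red and near-infrared bands (the formula is not restated in this summary, and the reflectance values below are purely illustrative):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red), in [-1, 1]."""
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in near-infrared and absorbs red light,
# so dense canopies score high and stressed or sparse cover scores low.
dense = round(ndvi(nir=0.45, red=0.05), 2)   # -> 0.8
sparse = round(ndvi(nir=0.20, red=0.15), 2)  # -> 0.14
print(dense, sparse)
```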
The abiotic data used to adjust the NDVI came from several sources. Topsoil composition data were obtained from the LUCAS 2018 topsoil dataset, restricted to the three most prevalent crop types: wheat, barley, and maize. The data were further filtered by removing winter-month observations, since most of those images were snow-covered. Climate data came from the Copernicus ERA5 dataset, which provided soil-related variables such as soil temperature, soil moisture, soil type, and air temperature. These variables were then linked to each NDVI observation. This step was vital to ensure that any recorded changes in NDVI resulted from fungi and not from other, non-living factors; if too many parameters varied simultaneously without adjustment, no meaningful correlation could be examined. The last dataset used was the LUCAS biodiversity dataset, which contains 885 biosamples in total, 115 of which correspond to wheat, barley, and maize crops during the study period. &amp;lt;br/&amp;gt;&lt;br /&gt;
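The adjustment step can be pictured as subtracting the abiotic model's prediction from the observed NDVI, leaving a residual attributed to biotic effects. A minimal sketch with made-up numbers, not the authors' random forest pipeline:

```python
def residual_ndvi(observed, predicted_abiotic):
    """Residual NDVI: the part of the signal the abiotic model cannot explain."""
    return [round(o - p, 2) for o, p in zip(observed, predicted_abiotic)]

observed = [0.72, 0.55, 0.80]   # measured NDVI per site (illustrative)
abiotic = [0.65, 0.58, 0.70]    # hypothetical predictions from soil/climate variables
# A positive residual suggests crop health beyond what abiotic factors explain.
print(residual_ndvi(observed, abiotic))  # -> [0.07, -0.03, 0.1]
```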
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Results''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first results of the paper showed which model, linear regression or random forest (RF), predicted the NDVI values better. As shown in Table 1, the RF model outperformed the linear regression model, with higher R^2 values (indicating greater explained variance) and lower RMSE values. Both models used 16 parameters obtained from the datasets described in the methods section. &amp;lt;br/&amp;gt;&lt;br /&gt;
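The two evaluation metrics behind this comparison can be computed as below; the NDVI values and predictions are invented for illustration, not taken from Table 1:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error: average prediction error in NDVI units."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: fraction of variance explained by the model."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [0.60, 0.70, 0.80, 0.50]   # observed NDVI (illustrative)
linear = [0.55, 0.65, 0.70, 0.60]   # hypothetical linear-regression predictions
forest = [0.60, 0.72, 0.78, 0.52]   # hypothetical random-forest predictions

# The better model shows higher R^2 and lower RMSE, as in the paper's comparison.
assert r_squared(y_true, forest) > r_squared(y_true, linear)
assert rmse(y_true, forest) < rmse(y_true, linear)
```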
&lt;br /&gt;
Since the RF models outperformed the linear regression model, they were used to carry out the rest of the fungal soil modeling. Two major clusters of fungal genera were identified through unsupervised clustering based on statistical groupings of their properties. The boxplots in Figure 2 show that the residual NDVI values were higher for cluster 1 than for cluster 2, indicating better plant health. &lt;br /&gt;
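Unsupervised two-way clustering can be illustrated with a tiny one-dimensional k-means (k = 2). This is a generic sketch on made-up residual values, not the authors' pipeline, which clustered genera by statistical properties of their composition:

```python
def two_means_1d(xs, iters=20):
    """Minimal 1-D k-means with k=2: partition values into a low and a high cluster."""
    lo_c, hi_c = min(xs), max(xs)  # initialize the two centroids at the extremes
    for _ in range(iters):
        # Assign each value to its nearest centroid, then recompute the centroids.
        low = [x for x in xs if abs(x - lo_c) <= abs(x - hi_c)]
        high = [x for x in xs if abs(x - lo_c) > abs(x - hi_c)]
        lo_c, hi_c = sum(low) / len(low), sum(high) / len(high)
    return low, high

# Hypothetical residual-NDVI values: the two groups separate cleanly.
low, high = two_means_1d([0.08, 0.10, 0.12, -0.05, -0.07, -0.02])
print(low)   # -> [-0.05, -0.07, -0.02]
print(high)  # -> [0.08, 0.1, 0.12]
```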
&lt;br /&gt;
To better understand which fungi may be contributing to the NDVI values, the frequencies of the fungal genera are visualized in Fig. 3. The researchers explain that fungi such as Tomentella and Mortierella, which are typically associated with plant health, are more abundant in (wheat) cluster 1. (Based on my own observation, however, both clusters have very similar frequency percentages of Mortierella, so I fail to see how it indicates plant health.) Fusarium, on the other hand, was very prevalent in cluster 2 while not appearing at all among the top 30 genera of cluster 1. This may indicate that Fusarium is a pathogenic genus, contributing to poorer plant health and, consequently, lower NDVI values. A bootstrap resampling approach identified 99 and 114 genera in clusters 1 and 2, respectively, as non-outlier data. &amp;lt;br/&amp;gt;&lt;br /&gt;
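Bootstrap resampling, used here to separate outlier genera from the rest, works by repeatedly resampling the data with replacement and checking where an estimate falls across the resamples. A minimal sketch of a percentile interval for a mean; the frequency data are invented and the per-genus outlier criterion is only an assumed illustration:

```python
import random

def bootstrap_mean_interval(values, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap interval for the mean of `values`."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(values, k=len(values))) / len(values)
        for _ in range(n_boot)
    )
    return means[int(n_boot * alpha / 2)], means[int(n_boot * (1 - alpha / 2)) - 1]

# Hypothetical relative frequencies of one genus across sampled sites:
freqs = [0.04, 0.05, 0.03, 0.06, 0.05, 0.04, 0.05, 0.07]
lo, hi = bootstrap_mean_interval(freqs)
# An estimate falling outside such an interval would be flagged as an outlier;
# the remaining genera are retained as non-outlier data.
print(lo <= sum(freqs) / len(freqs) <= hi)  # -> True
```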
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;'''Discussion''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	In the discussion section, the researchers compared their findings with past literature and addressed the limitations of their models. Several results agreed with prior work: NDVI varies with the seasons, peaking around the harvest months; soil carbon content correlates non-linearly with NDVI; and meteorological variables such as temperature and soil moisture carry the highest importance for predicting NDVI. The researchers also note that their abiotic model's removal of non-living influences from the NDVI response is consistent with earlier results. &lt;br /&gt;
	The researchers acknowledged several limitations of the study. First, their meteorological data are confined to certain ranges, such as temperatures between 280 K and 290 K, so the abiotic model cannot handle extreme weather well. Additionally, the biological data were collected only in 2018, so changes over time could not be observed. Future studies could benefit from broader geospatial coverage and longer time periods, and higher-resolution satellites may map biotic (living) effects better. The researchers also suggest testing fungal interactions both in vivo and in vitro; they cite Mortierella as an example, noting that no known field experiments have tested how different abundance levels of the fungus affect NDVI, and they outline how such experiments could be carried out. Another candidate for future research is species-level taxonomy for the NDVI analysis, since this paper used only the broader genus-level taxonomy.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	<entry>
		<id>http://147.102.106.44/rs/wiki/index.php/Exploring_crop_health_and_its_associations_with_fungal_soil_microbiome_composition_using_machine_learning_applied_to_remote_sensing_data</id>
		<title>Exploring crop health and its associations with fungal soil microbiome composition using machine learning applied to remote sensing data</title>
		<link rel="alternate" type="text/html" href="http://147.102.106.44/rs/wiki/index.php/Exploring_crop_health_and_its_associations_with_fungal_soil_microbiome_composition_using_machine_learning_applied_to_remote_sensing_data"/>
				<updated>2026-01-13T11:33:50Z</updated>
		
		<summary type="html">&lt;p&gt;Elenikaroutsos: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''' &amp;lt;span style=&amp;quot;color:#006400&amp;quot;&amp;gt; Article title: &amp;quot;Exploring crop health and its associations with fungal soil microbiome composition using machine learning applied to remote sensing data&amp;quot;''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Authors: Mathies Brinks Sørensen, David Faurdal, Giovanni Schiesaro, Emil Damgaard Jensen, Michael Krogh Jensen, Line Katrine Harder Clemmensen&lt;br /&gt;
''' &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Source:''' https://doi.org/10.1038/s43247-025-02330-0  &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Date: 7 May 2025''' &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Summary and Introduction''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;text-indent: 2em;&amp;quot;&amp;gt;The goal of this study was to examine how remote sensing combined with machine learning could assist in understanding crop health. One of the fastest-growing concerns around the globe is increasing food insecurity as populations continue to rise. For this reason, it is important to apply sustainable agricultural practices that increase crop yields and productivity. &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Smart farming methods integrate remote sensing technologies, such as satellite imagery and drones, with data analytics tools to monitor fields. The satellites most commonly used in such applications are Sentinel-2 and Landsat 8, which offer medium spatial resolution but revisit a site only every several days. The MODIS sensor is another option since it provides daily coverage; however, its resolution is lower, yielding less accurate analyses. Multispectral images from these satellites are used to calculate vegetation indices that visualize vegetation characteristics. The most common indices are the normalized difference vegetation index (NDVI), the enhanced vegetation index (EVI), and the leaf area index (LAI). &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Of all the vegetation indices, the NDVI is the most widely used, even though a large number of factors can affect it, including soil moisture, climate, nutrients, crop type, and temperature. The paper reviewed the literature on how different microbial species may affect the NDVI. Prior research has established a link between certain bacteria and NDVI values, but so far there has been limited research linking particular fungi to NDVI values. The studies that do exist have shown that soil decomposers contribute to higher NDVI values and that fungal richness is connected to environmental factors such as pH and landscape type. This lack of research on fungal microbiomes motivated the study to examine the connection between fungal soil composition and NDVI values more closely. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt; '''Methods and Data''' &amp;lt;/u&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first step of the methodology was to download the satellite images and compute the NDVI, which is derived from the near-infrared and red bands as (NIR - Red) / (NIR + Red). Low NDVI values indicate poor crop health, while high values indicate good crop health. The NDVI values were then adjusted for abiotic (non-living) factors by removing their influence with a random forest model. Finally, the adjusted NDVI values were analyzed against the different fungal soil microbiomes. &lt;br /&gt;
The abiotic data used to adjust the NDVI came from several sources. Topsoil composition data were obtained from the LUCAS 2018 topsoil dataset, restricted to the three most prevalent crop types: wheat, barley, and maize. The data were further filtered by removing observations from the winter months, when most of the images were covered in snow. Climate data came from the ERA5 Copernicus dataset, which provided variables such as soil temperature, soil moisture, soil type, and air temperature. These variables were then linked to each NDVI observation. This step was essential to ensure that any recorded changes in the NDVI values resulted from fungi rather than from other non-living factors; if too many parameters varied simultaneously without adjustment, no meaningful correlation could be examined. The last dataset used was the LUCAS biodiversity dataset, which contains a total of 885 biosamples, of which 115 correspond to wheat, barley, and maize crops within the study period.&lt;/div&gt;</summary>
		<author><name>Elenikaroutsos</name></author>	</entry>

	</feed>