The laser return number is an important parameter for characterizing the detection ability of a satellite laser ranging (SLR) system and is closely related to the atmospheric transmission characteristics of the laser. Accurate evaluation of the laser return number not only provides a theoretical basis for system design and optimization but is also a key issue and primary link in the future development of automated SLR systems. During SLR operation, atmospheric scattering, absorption, and turbulence continuously attenuate the laser energy along the atmospheric transmission channel, directly reducing the average laser return number of the system. The influence of the atmospheric environment on photon detection becomes increasingly evident as the detection distance increases. To effectively evaluate the average laser return number of an SLR system and explore the relationship between laser atmospheric transmission characteristics and system detection performance, the atmospheric transmission characteristics of lasers should be analyzed.
In our study, a lidar atmospheric correction (LAC) model is built based on Mie scattering theory and actual meteorological conditions. First, based on the slant-path propagation theory of lasers, the whole-atmosphere transmittance at different wavelengths (450, 500, and 550 nm) is calculated. Then, the average laser return number per unit time of the SLR system under different meteorological conditions is calculated, and the model is validated against actual observations of the 60 cm SLR system at Changchun Observatory. Finally, the effects of visibility and relative humidity on the average laser return number are analyzed.
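The slant-path transmittance calculation can be sketched as a Beer-Lambert integral. In this minimal sketch, the empirical Koschmieder visibility relation and an exponential extinction profile stand in for the paper's Mie-based extinction; the wavelength exponent q, the scale height, and the flat-Earth slant factor are all assumptions, not the paper's actual formulation.

```python
import math

def slant_transmittance(visibility_km, wavelength_nm, elevation_deg,
                        scale_height_km=1.2):
    """Whole-atmosphere transmittance along a slant path (flat-Earth
    approximation, reasonable above ~10 deg elevation)."""
    # Surface extinction from the Koschmieder visibility relation with an
    # empirical wavelength exponent q (assumed values).
    q = 0.585 * visibility_km ** (1 / 3) if visibility_km <= 6 else 1.3
    sigma0 = (3.912 / visibility_km) * (wavelength_nm / 550.0) ** (-q)  # km^-1
    # Integrating an exponential extinction profile over altitude gives a
    # vertical optical depth of sigma0 * scale_height.
    tau_vertical = sigma0 * scale_height_km
    # The slant path lengthens the optical depth by 1/sin(elevation).
    tau_slant = tau_vertical / math.sin(math.radians(elevation_deg))
    return math.exp(-tau_slant)
```

As expected for such a model, the transmittance increases with elevation angle and visibility and decreases toward shorter wavelengths.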
Compared with the empirical formula adopted in conventional lidar equations, the mean relative error of the atmospheric transmittance calculated with the slant-path correction theory decreases from 14.201% to 5.992%, i.e., to less than half (Fig. 2 and Table 1). The average laser return number per unit time of the SLR system calculated with the LAC model exhibits good consistency with the measured data, with an average relative error of less than 15% (Fig. 4 and Table 2). The average laser return number received by the SLR system increases with visibility and decreases with relative humidity (Figs. 5 and 6). When the elevation angle of the telescope is less than 15°, the influences of visibility and relative humidity on the average laser return number are not significantly different. When the elevation angle is greater than 15°, the influence of visibility is slightly greater than that of relative humidity, and both reach their peak around 60° (Fig. 7). Additionally, owing to the temperate continental climate at Changchun Observatory, there are significant seasonal variations in the average laser return number per unit time received by the SLR system (Fig. 8).
The average laser return number of an SLR system is an important parameter characterizing the detection ability of the system and is closely related to the atmospheric transmission characteristics of lasers. Based on Mie scattering theory and the actual distribution of aerosol particles, the LAC model is proposed and employed to calculate the average laser return number of the SLR system. Taking the 60 cm SLR system at Changchun Observatory as an example, the effect of climate conditions on the average laser return number is analyzed. The results indicate that the average laser return number increases with rising near-surface visibility and decreases with increasing relative humidity. When the elevation angle of the telescope is greater than 15°, the influence of visibility is greater than that of relative humidity, and their influence reaches its peak around 60°. Our study not only elucidates the inherent mechanism by which climate conditions affect the detection performance of SLR systems but also provides theoretical solutions and technical support for SLR system site selection and performance evaluation.
As underwater military activities and scientific research become increasingly frequent, the demand for high-speed, high-quality, and high-bandwidth underwater communication has become urgent. However, laser communication in seawater is hampered by scattering, absorption, and turbulence effects, which degrade beam quality and increase communication error rates. Consequently, it is of practical significance to study the beam-quality degradation characteristics of blue-green lasers in seawater. However, calculating laser propagation through seawater turbulence is complex and time-consuming. Therefore, it is significant to establish a beam-expansion calibration formula (scaling law), especially for blue-green laser propagation in seawater turbulence. Such a scaling law enables rapid prediction and evaluation of beam-expansion magnitudes and patterns.
First, we build a rigorous physical model to describe the propagation of blue-green lasers in seawater turbulence. By adopting the power spectrum inversion method, phase screens of seawater turbulence are generated to enable numerical calculation of the beam-expansion variation, with both seawater and laser parameters considered. Second, the β factor is employed to evaluate the energy concentration of laser beams on the target plane, thereby revealing the beam expansion caused by seawater turbulence. Finally, a beam-expansion scaling law for blue-green lasers propagating in seawater turbulence is proposed via a mean-square-sum processing method.
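The power spectrum inversion step can be sketched as follows: complex Gaussian white noise is filtered in the spatial-frequency domain by the square root of the turbulence power spectrum and inverse-transformed to yield a phase screen. A generic Kolmogorov-type power-law spectrum is used here as a stand-in for the full seawater (Nikishov) spectrum, and the normalization constants are illustrative only.

```python
import numpy as np

def phase_screen(n=256, delta=0.01, c=1e-14, rng=None):
    """One turbulence phase screen by power-spectrum inversion.
    A Kolmogorov-type power law Phi(k) ~ c * k**(-11/3) stands in for
    the full seawater turbulence spectrum (assumption)."""
    rng = np.random.default_rng(rng)
    dk = 2 * np.pi / (n * delta)                      # k-space grid spacing
    kx = np.fft.fftfreq(n, d=delta) * 2 * np.pi
    kxx, kyy = np.meshgrid(kx, kx)
    k = np.hypot(kxx, kyy)
    k[0, 0] = np.inf                                  # suppress the zero mode
    spectrum = c * k ** (-11.0 / 3.0)
    # Filter complex Gaussian white noise by sqrt of the spectrum.
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    screen = np.fft.ifft2(noise * np.sqrt(spectrum) * dk) * n * n
    return np.real(screen)
```

In a split-step propagation, one such screen would be applied per step, with the spectrum constant tied to the seawater dissipation-rate parameters.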
The estimates of the fitted scaling law are compared with the numerical calculations. The results show that the scaling law matches the numerical calculations well over certain ranges of laser and seawater turbulence parameters: a beam waist radius of 0.001-0.100 m, an initial beam quality factor of 1.0-4.0, a wavelength of 470-550 nm, and a ratio of temperature-induced to salinity-induced seawater turbulence of −5.0 to −0.5. The seawater turbulence parameters are a kinetic energy dissipation rate of 10⁻¹⁰-10⁻¹ m²/s³ and a dissipation rate of temperature difference of 10⁻¹⁰-10⁻⁴ K²/s. Within these ranges, the maximum error between the total beam expansion estimated by the scaling law and the numerical results is 10.90%, with a maximum average error of 4.70%. Consequently, the scaling law can accurately predict the beam expansion of Gaussian beams propagating in seawater turbulence.
To rapidly and accurately predict the beam expansion of Gaussian beams propagating in seawater turbulence, we first analyze how the beam expansion varies with laser and seawater parameters. Subsequently, a scaling law for the beam expansion of blue-green lasers in seawater turbulence is proposed, and its coefficients are determined by the least squares method. The errors between the beam expansion estimated by the scaling law and the numerical results are then evaluated under different parameters. The results show that within the specified parameter range, the average error between the scaling-law estimates and the numerical results is within 5%.
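The coefficient-fitting step can be sketched by log-linearizing an assumed power-law scaling form and solving with ordinary least squares. The functional form and the synthetic "numerical" data below are illustrative stand-ins, not the paper's actual scaling law.

```python
import numpy as np

# Hypothetical power-law scaling form for the beam-expansion factor:
# beta = a * w0**b * M2**c (the paper's actual form may differ).
rng = np.random.default_rng(0)
w0 = rng.uniform(0.001, 0.1, 200)      # beam waist radius, m
m2 = rng.uniform(1.0, 4.0, 200)        # initial beam quality factor
beta = 2.5 * w0 ** -0.3 * m2 ** 0.8    # synthetic "numerical" results

# Log-linearize: log(beta) = log(a) + b*log(w0) + c*log(M2),
# then solve with ordinary least squares.
A = np.column_stack([np.ones_like(w0), np.log(w0), np.log(m2)])
coef, *_ = np.linalg.lstsq(A, np.log(beta), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
```

With noiseless synthetic data the fit recovers the generating exponents exactly; with real numerical data the residual of this least-squares solve quantifies the scaling-law error.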
Phase-state recognition of cloud particles is an important topic in cloud physics research and is also significant for retrieving other cloud microphysical parameters. With the development of remote sensing technology, researchers have developed various cloud particle phase recognition methods, such as decision trees, classical statistical decision methods, neural networks, clustering algorithms, and fuzzy logic algorithms. However, owing to the complex characteristics of cloud particles, the radar signatures corresponding to different particles have no absolute boundaries and may overlap to some degree. Thus, recognition algorithms based on rigid thresholds are not well suited to phase recognition and classification of cloud particles. Fortunately, the fuzzy logic algorithm can overcome this rigid-threshold defect, but the accuracy of the T-function coefficients in fuzzy logic directly determines the accuracy of the recognition results. To accurately and finely identify cloud phase states, we propose an optimized fuzzy logic algorithm for recognizing the phase states of cloud particles. Compared with the original fuzzy logic algorithm, which can only recognize ice crystals, snow, mixed phases, liquid cloud droplets, drizzle, and raindrops, the optimized algorithm can also recognize supercooled water and warm cloud droplets.
Based on the induction and summary of a large volume of simultaneous aircraft and remote sensing observation data and comprehensive consideration of the characteristics of different cloud types, we adjust and optimize the T-function coefficients of the fuzzy logic algorithm. A table of T-function coefficients for different cloud phase particles is constructed (Table 2). The corrected reflectivity factor, radial velocity, and spectral width detected by millimeter-wave cloud radars with high spatiotemporal resolution, as well as the temperature detected by a microwave radiometer, are adopted as input parameters for the optimized fuzzy logic algorithm. Following the phase-recognition process shown in Fig. 1, snow, ice, mixed phase, supercooled water, warm cloud droplets, drizzle, and rain can be identified.
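The core of the fuzzy logic recognition can be sketched with an asymmetric trapezoidal T-function and a product-aggregation classifier. The coefficient table below is hypothetical and uses only two of the four inputs for brevity; the paper's Table 2 covers all seven phases and all four inputs with tuned coefficients.

```python
def t_function(x, x1, x2, x3, x4):
    """Asymmetric trapezoidal membership (T-) function: rises on
    [x1, x2], equals 1 on [x2, x3], falls on [x3, x4]."""
    if x <= x1 or x >= x4:
        return 0.0
    if x < x2:
        return (x - x1) / (x2 - x1)
    if x <= x3:
        return 1.0
    return (x4 - x) / (x4 - x3)

# Hypothetical coefficients per phase: reflectivity Z (dBZ), temperature (deg C).
COEFFS = {
    "snow":              {"Z": (5, 15, 35, 45),   "T": (-40, -30, -5, 1)},
    "rain":              {"Z": (15, 25, 45, 55),  "T": (0, 2, 20, 30)},
    "supercooled water": {"Z": (-30, -20, -5, 5), "T": (-25, -15, -3, 0)},
}

def classify(z_dbz, temp_c):
    """Aggregate per-input memberships and pick the best-scoring phase."""
    scores = {ph: t_function(z_dbz, *c["Z"]) * t_function(temp_c, *c["T"])
              for ph, c in COEFFS.items()}
    return max(scores, key=scores.get)
```

Because memberships vary continuously between 0 and 1, inputs near a class boundary degrade a score gradually instead of flipping the decision, which is the advantage over rigid thresholds noted above.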
The cloud particle phases of a snowfall event observed on 6 February 2022 in Xi'an are retrieved to verify the effectiveness and accuracy of the optimized algorithm. We input the parameters characterizing the cloud particles in Fig. 3 (corrected reflectivity factor, radial velocity, spectral width, and temperature) into the optimized fuzzy logic algorithm and obtain the phase recognition results shown in Fig. 5. The cloud phase distribution in Fig. 5 (near the ground, at a height of about 200 m) is highly consistent with the particle phase changes recorded by the ground precipitation phenomenon meter. We also compare the results of the optimized fuzzy logic algorithm (Fig. 5) with those of the original algorithm (Fig. 4) and find that the optimized algorithm can identify supercooled water, which the original algorithm cannot, benefiting studies of particle phase transformation and precipitation mechanisms in clouds.
We propose an optimized fuzzy logic algorithm by optimizing the asymmetric T-function coefficients and considering the effects of reflectivity-factor attenuation and temperature on the recognition accuracy. The corrected reflectivity factor, radial velocity, spectral width, and spatiotemporally continuous temperature detected by the microwave radiometer are leveraged as input parameters. The optimized algorithm can accurately identify snow, ice, mixed phase, supercooled water, warm cloud droplets, drizzle, and rain particles in clouds, which helps study and retrieve cloud microphysical parameters.
A filament is a plasma channel with high laser intensity and high plasma density formed by the propagation of intense femtosecond laser pulses in a transparent medium. Several studies have shown that the cross-section image of an optical filament at a specific z usually contains abundant structural information, such as the filament diameter, length, and energy distribution, which is of great significance for visualizing the dynamic process of filament formation. Moreover, accurate acquisition of the spatial structure and energy deposition distribution of femtosecond optical filaments is also of great significance for developing filamentation-based atmospheric applications. Nevertheless, the energy deposition distribution is among the inherent parameters most difficult to measure directly. To solve this problem, we introduce a medical imaging method, photoacoustic tomography (PAT), for optical-filament cross-section imaging. The feasibility of reconstructing monofilament and multifilament images by PAT is verified theoretically. Moreover, we also study the influence of the performance parameters of the ultrasonic transducers on the optical-filament image reconstruction.
We adopt a forward simulation model based on the photoacoustic wave equation to simulate the acquisition of ultrasonic signals induced by optical filaments in air. A circular-scanning PAT system is considered to obtain the cross-section image of the laser filament. To simplify the problem, we assume that the initial heat-source distribution of the optical filament is Gaussian, which can represent both the small high-energy core of the filament and its weaker, wider background energy region. Based on experimental measurements, the initial maximum energy deposition density is assumed to be on the order of 10 mJ/cm³, and the heat-source diameter on the order of 100 μm. The simulated time series of the acoustic signal is then used to reconstruct the transverse distribution of femtosecond laser filaments with the delay-and-sum (DAS) algorithm. Moreover, we analyze the influence of transducer performance parameters, such as center frequency, bandwidth, surface size, and detection surface sensitivity, on the reconstruction of filament cross-sectional images. The back-projection amplitude profile along the y-axis is leveraged to compare reconstruction effects.
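The DAS reconstruction step can be sketched as follows, assuming the circular-scanning sensors are ideal point detectors and the sound speed in air is constant; these simplifications (and the variable names) are illustrative, not the paper's exact implementation.

```python
import numpy as np

def das_reconstruct(signals, t, sensor_xy, grid_xy, c=343.0):
    """Delay-and-sum: each image pixel accumulates every sensor's signal
    sampled at the acoustic time of flight from pixel to sensor.
    signals: (n_sensors, n_samples); sensor_xy, grid_xy: (n, 2), in m."""
    dt = t[1] - t[0]
    n_samples = signals.shape[1]
    image = np.zeros(len(grid_xy))
    for s, pos in enumerate(sensor_xy):
        d = np.linalg.norm(grid_xy - pos, axis=1)        # pixel-sensor distance
        idx = np.round((d / c - t[0]) / dt).astype(int)  # time-of-flight sample
        idx = np.clip(idx, 0, n_samples - 1)
        image += signals[s, idx]
    return image / len(sensor_xy)
```

Signals from a true source position add coherently at the matching pixel, while mismatched delays average toward zero elsewhere; the finite aperture effect discussed below arises when the angular coverage or transducer geometry breaks this cancellation.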
According to the time series of ultrasound signals generated by monofilaments and multifilaments recorded at different detection distances, the spectra of monofilaments and multifilaments induced by femtosecond lasers with multi-millijoule pulse energy are mainly concentrated below 4 MHz (Fig. 2). The spectrum of a monofilament has a single-peak structure, while that of a multifilament has a multi-peak structure (Fig. 2). The sound-pressure amplitude decreases rapidly due to attenuation in air. As the center of the optical filament deviates further from the scanning center, the cross-section image reconstructed by the back-projection (BP) and DAS algorithms exhibits obvious elongation in the tangential direction (y-axis), the so-called "finite aperture effect" (Fig. 3). For monofilaments, the maximum energy amplitude decreases significantly with increasing transducer center frequency, which may be related to the filtering out of more low-frequency signals (Fig. 4). The same method is adopted to reconstruct multifilament images. The reconstructed multifilament image suffers serious deformation when the multifilament center deviates from the scanning center (Fig. 5). When x0=1.0 mm, the two monofilaments near the scanning origin can still be distinguished, whereas the two near the transducer are fused and cannot be distinguished. Therefore, the secondary filaments around multiple filaments are more susceptible to the aperture effect, and blurring deformation occurs; this effect becomes more obvious as the distance from the scanning center increases or the distance from the transducer surface decreases. Consequently, compared with monofilament reconstruction, multifilament image reconstruction is more strongly affected by the aperture effect.
In particular, blurring deformation of the surrounding sub-filaments is more likely. In summary, the transducer characteristics have an obvious influence on the reconstruction of monofilament and multifilament cross-sectional images. A larger transducer bandwidth corresponds to a smaller surface diameter, a larger surface sensitivity parameter, and better reconstruction quality of monofilament and multifilament images. The influence of the transducer center frequency on filament image reconstruction is complicated; therefore, a transducer with an appropriate center frequency should be selected in actual measurements, combined with spectrum analysis of the acoustic signal.
We utilize a medical imaging method, PAT, to reconstruct cross-section images of femtosecond laser filaments formed in air. The results show that the acoustic signal induced by a monofilament has a single-peak spectrum, while that induced by a multifilament has a multi-peak spectrum. The transducer performance parameters have an obvious influence on the reconstruction results: a larger bandwidth leads to a smaller surface diameter, a larger surface sensitivity coefficient, and a better reconstruction of the energy deposition distribution of the filament. Compared with monofilaments, multifilament image reconstruction is more susceptible to the finite aperture effect. Our study provides theoretical support for experimental measurement of the spatially deposited energy distribution of femtosecond laser filaments under real atmospheric conditions.
Microwave photonic technology has important potential in future high-speed microwave/millimeter-wave communication systems due to its large bandwidth, low loss, and immunity to electromagnetic interference. However, owing to the inherent cosine response of electro-optic modulators, the output signals of a broadband multi-carrier microwave photonic link (MPL) suffer from nonlinear distortions, mainly harmonic distortion (HD), cross-modulation distortion (XMD), and third-order intermodulation distortion (IMD3). Since HD can be filtered out by a suitable filter, XMD and IMD3 are the main factors limiting system performance. Although various optical and electrical methods have been proposed to compensate for IMD3, few can quickly compensate for both the XMD and IMD3 of a broadband MPL simultaneously. Thus, we present a nonlinear distortion model for compensating the in-band IMD3 and out-of-band XMD in a wideband MPL. This method requires neither a priori parameters of the system and signals nor a complicated training and iterative optimization process, and is therefore more practical.
We provide a nonlinear distortion model for a broadband multi-carrier MPL. First, owing to the large frequency difference between the HD signal and the fundamental signal, the HD signal can be easily removed by a digital filter. Then, the XMD and IMD3 signals, which have the opposite sign to the fundamental signal, are extracted; it follows that the cubic power of the XMD and IMD3 signals also has the opposite sign to the fundamental signal. Based on this characteristic, a cost function with a closed-form solution can be constructed, from which an optimal linearization coefficient is obtained quickly and adaptively. Finally, this optimal linearization coefficient is introduced to compensate the XMD and IMD3 simultaneously in the digital domain.
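The cubic post-compensation idea can be illustrated with a two-tone simulation. Here a fixed series-inversion coefficient α = 1/6 for a sine transfer function replaces the paper's adaptive closed-form cost-function solution, so this is a sketch of the compensation structure only, not the proposed algorithm.

```python
import numpy as np

fs, n = 4.096e6, 4096
t = np.arange(n) / fs
f1, f2 = 100e3, 110e3                        # bin-aligned two-tone input
x = 0.2 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))
y = np.sin(x)                                # sine-response modulator model

# Digital post-compensation y_lin = y + alpha * y**3. For sin(x) ~ x - x**3/6,
# third-order series inversion gives alpha = 1/6 (fixed here; the paper
# instead obtains alpha adaptively from a closed-form cost function).
alpha = 1.0 / 6.0
y_lin = y + alpha * y ** 3

def tone_amp(sig, f):
    """Amplitude of the spectral line nearest to frequency f."""
    spec = np.abs(np.fft.rfft(sig)) / (len(sig) / 2)
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

imd3_before = tone_amp(y, 2 * f1 - f2)       # IMD3 product at 90 kHz
imd3_after = tone_amp(y_lin, 2 * f1 - f2)
```

Adding the cubic term cancels the third-order products while leaving the fundamental tones essentially unchanged, which is the mechanism the closed-form coefficient exploits.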
Simulation experiments are conducted to verify the XMD and IMD3 suppression performance. Figure 2 shows the signal spectra before and after linearization when two-tone signals are received: the XMD and IMD3 are suppressed by more than 35 dB and 29 dB, respectively. The power of the fundamental signal remains unchanged, but the power of the XMD term increases linearly with a slope of 2 (Fig. 3). Additionally, after compensation by the proposed algorithm, all XMD terms are suppressed below the noise floor, and the compensation effect does not degrade with increasing input fundamental power. As the input fundamental power increases, the powers of the fundamental and IMD3 signals of the pre-compensation in-band signal rise linearly with slopes of 1 and 3, respectively, while the power of the XMD term after linearization increases linearly with a slope of 5. The spurious-free dynamic range of the compensated system is improved by more than 21.5 dB (Fig. 4). After algorithmic compensation, the error vector magnitudes (EVMs) of single-carrier orthogonal frequency division multiplexing (OFDM) and multi-carrier OFDM signals are improved by 6.1% and 5.9%, respectively (Figs. 6 and 7). When multi-carrier OFDM signals with different Vpp are input (Fig. 8), the best compensation is obtained at 1 V, with the EVM improved by 7.2%.
A nonlinear distortion model is presented for the XMD and IMD3 generated in a broadband multi-carrier MPL. Based on the characteristic that the XMD and IMD3 signals have the opposite sign to the fundamental signals, the out-of-band XMD and in-band IMD3 can be suppressed. Compared with traditional XMD and IMD3 compensation methods, this method requires neither a priori parameters of the system and signals nor a complicated training and iterative optimization process. Simulation results show that the XMD and IMD3 are suppressed by more than 35 dB and 29 dB, respectively, and the spurious-free dynamic range is improved by about 22 dB when a multi-tone signal is transmitted. When a multi-carrier OFDM signal is transmitted, the EVM is optimized from 8.1% to 2.2%.
Due to the influence of the external environment and system aging, the radiation characteristics of a camera change after launch. On-orbit radiometric calibration, which converts the image grayscale values of the sensor response into spectral radiance or top-of-atmosphere reflectance, is of great importance for the quantitative application of remote sensing data. Common on-orbit radiometric calibration methods fall into four categories: onboard calibration, site calibration, cross calibration, and scene calibration. As a long-term stable natural celestial body, the moon has very stable surface reflectivity. It can be used as a calibration source that avoids interference from the complex atmosphere and serves as a supplement to onboard calibration. At present, the internationally representative lunar radiation models are the Robotic Lunar Observatory (ROLO) model and the Miller-Turner 2009 (MT2009) model. The spectral coverage of the ROLO model used in this study is 300-2550 nm, with a model uncertainty of 5%-10%. Although the ROLO model has larger uncertainty than site or onboard calibration, its relative stability can reach 1%-2%, so it can be used as a normalized reference to monitor sensor attenuation. Many researchers have used lunar irradiance models to carry out radiometric calibration or to monitor sensor stability by comparing data from different months and moon phases. However, these studies only focus on multi-temporal tracking of the on-orbit radiation performance of individual sensors and do not consider consistency correction between different sensors and spectral bands. In the present study, we propose a radiation consistency correction method based on lunar calibration, aiming to correct the inconsistent radiation response of the dual cameras installed on the Jilin-1 GP satellite.
We propose a radiation consistency correction method for the dual cameras onboard the Jilin-1 GP satellite by lunar calibration, based on the stable radiation response characteristics of the moon. First, lunar imaging data of the two cameras are obtained successively by adjusting the satellite attitude. Then, the lunar spectral irradiances of the different spectral channels of the two sensors are calculated from the image data. The results are compared with the ROLO lunar irradiance model, and the spectral band with small irradiance change and close irradiance response between the two cameras is selected as the reference band. Finally, the irradiance ratio of each band to the reference band is calculated to correct the attenuation of each band, achieving dual-camera radiometric consistency correction for the Jilin-1 GP satellite.
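The reference-band selection and ratio correction can be sketched as follows. The irradiance values are hypothetical, and the selection criterion (minimum combined deviation from the model and between cameras) is a simplified stand-in for the paper's procedure.

```python
import numpy as np

# Hypothetical lunar band irradiances (arbitrary units) for two cameras
# and the ROLO model prediction for the same observation geometry.
bands = ["blue", "green", "red", "nir"]
e_model = np.array([1.00, 1.20, 1.10, 0.90])   # ROLO reference
e_cam_a = np.array([0.95, 1.19, 0.98, 0.70])   # camera A measurement
e_cam_b = np.array([0.97, 1.18, 1.02, 0.74])   # camera B measurement

# 1) Pick the reference band: smallest attenuation relative to the model
#    and closest response between the two cameras.
dev = (np.abs(e_cam_a / e_model - 1) + np.abs(e_cam_b / e_model - 1)
       + np.abs(e_cam_a / e_cam_b - 1))
ref = int(np.argmin(dev))

# 2) Correct each band so its ratio to the reference band matches the
#    model's band ratio; k_* multiplies each band's calibration coefficient.
k_a = (e_model / e_model[ref]) / (e_cam_a / e_cam_a[ref])
k_b = (e_model / e_model[ref]) / (e_cam_b / e_cam_b[ref])
```

By construction, the corrected band ratios of both cameras then agree with the ROLO relative spectrum, which is what ties the dual cameras to a common radiometric reference.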
The correction values of the absolute radiometric calibration coefficients indicate that after the satellite has been on orbit for some time, each spectral band fluctuates to a certain degree, and some bands are even attenuated by more than 30% (Table 3). Four sets of data from different imaging scene types are selected for testing, of which the red, green, and blue bands are combined into true-color images. Visually, the corrected dual-camera images have better color consistency (Figs. 7 and 8). The relative average spectral error (RASE) and the relative dimensionless global error in synthesis (ERGAS) are adopted to evaluate the spectral consistency of the entire image and of the overlap region imaged by both cameras before and after correction. The RASE and ERGAS between the two cameras after correction are better than those before correction, for both the entire image and the overlap region (Table 5). The experimental results show that our dual-camera radiometric consistency correction method significantly improves radiometric consistency, especially in the overlap region.
In the present study, based on imaging data from simultaneous observations of the moon by the two cameras of the Jilin-1 GP02 satellite, we propose a dual-camera radiation consistency correction method based on lunar calibration. First, from the acquired lunar observation data, the band with the closest lunar irradiance results between the two cameras is selected as the single-band radiation reference, which serves as the benchmark for correcting the other bands. Furthermore, with the relative relationship between spectral bands in the ROLO model as a reference, all bands of the two cameras are corrected relative to the reference band, bringing the remaining bands of the dual cameras into consistency with the reference band. The test results show that some spectral bands of the Jilin-1 GP02 satellite exhibit obvious attenuation. After compensating for the attenuation, the true-color images taken by the two cameras are visually more consistent, and the RASE and ERGAS of the dual-camera overlap region are also significantly smaller.
Synthetic aperture radar (SAR) is a microwave imaging radar that achieves high resolution by the synthetic aperture principle. Unaffected by weather or time of day, it can obtain high-quality, high-resolution, large-scale, and long-distance images all day and in all weather. SAR ship detection can provide important technical support in fields such as ocean monitoring, oil, port management, marine resource development, and marine scientific research, as it can detect ships and equipment at sea and identify potential safety risks in advance. Meanwhile, ship detection technology has important strategic significance for maritime monitoring, border patrol, maritime rescue, and the safety of maritime channels. We aim to improve the accuracy of SAR ship detection, reduce false alarms, and enhance the adaptability of the model.
Traditional SAR image target detection methods include texture analysis, polarization characteristics, and constant false alarm rate (CFAR) algorithms. Among them, the most widely used is the CFAR algorithm, which has certain advantages in speed, but its drawbacks, namely high computational complexity and susceptibility to complex backgrounds, result in unsatisfactory detection efficiency. In actual SAR imaging, the backgrounds are mostly ports, islands, reefs, and other structures, which have high grayscale values and are easily confused with targets. Therefore, for ship detection at sea, multiple complex backgrounds, irregular ship arrangements, misdetection of similar targets, and other uncertain factors should be considered, since their features are to some degree similar to those of ships. We therefore propose an efficient aggregation feature enhancement network (EAFENet) to solve the problems of low accuracy, serious false detections, and unstable performance in current SAR ship detection. The core idea is to efficiently aggregate stacking modules and introduce residual structures to effectively transmit gradient and feature information and alleviate gradient vanishing and feature loss. The combination of the CBS (convolution + batch normalization + SiLU) module, the convolutional block attention module (CBAM), and the leaky ReLU activation function increases the sensitivity of the network to target features and introduces low-dimensional feature fusion. Through multi-layer feature pyramid connections, feature expression is further extended and enhanced, and residual skip connections enhance the learning ability and generalization of the model.
Qualitative and quantitative experiments, as well as ablation studies, are conducted on EAFENet and other mainstream SAR ship detection models. To demonstrate the effectiveness of each improvement, the YOLOv7 network is used as the baseline, and six sets of experiments are conducted on the SSDD dataset under the same environment and parameters. The detected images include multi-target, few-target, and complex-background scenes. As shown in Table 3, the effect is not ideal when attention is used alone, and it is significantly improved when the proposed EL-CB (efficient layer convolutional block) is used. The proposed global enhanced feature pyramid branch structure improves the feature pyramid performance and enhances the fusion of shallow features: accuracy is improved by nearly three percentage points, the recall rate and mAP0.50:0.95 are both improved by nearly 10 percentage points, and mAP0.5 is improved by 6.4 percentage points, proving the effectiveness of each module. To further evaluate performance, the improved algorithm is compared with current mainstream algorithms under the same experimental environment and the same training and test sets. The indicators of Faster R-CNN, SSD, YOLOv5, YOLOv7, CenterNet, and our algorithm are shown in Table 4. In terms of accuracy, EAFENet performs the best at 95.40%, followed by YOLOv5 and YOLOv7 at 93.32% and 92.90%, respectively, while SSD and Faster R-CNN reach 84.10% and 82.70%. Compared with other algorithms, EAFENet uses a more efficient feature extraction module, which reduces misjudgment to some extent.
However, mainstream algorithms such as SSD have relatively weak feature extraction designs and lack deeper fusion of shallow features in the feature fusion process, resulting in relatively inaccurate predictions. When considering the mAP value, EAFENet again performs best, reaching an mAP0.5 of 98.90%, followed by YOLOv5 and YOLOv7 at 94.25% and 92.50%, respectively; the mAP0.5 of SSD and Faster R-CNN is 86.01% and 89.17%, respectively. The deeper fusion in the proposed network structure causes a slight decrease in FPS (frames per second). Overall, compared with other classic algorithms, the proposed algorithm still has a clear speed advantage, and its greatly reduced false detection rate can meet the basic needs of real-time detection.
In response to the problems of low accuracy and high false detection rate in SAR ship detection, we propose a SAR ship detection method based on EAFENet. An EL-CB is constructed with spatial-channel attention as the feature extraction module of the backbone network, and InceptionNeXt is used as the feature extraction part of the neck to improve efficiency, enabling the network to better capture multi-scale information with detail-aware perception. In the network structure, a global enhanced feature pyramid branch is constructed by fusing deep-level features with low-level features, so that the feature extraction network considers both low-level and deep-level information simultaneously, effectively enhancing feature acquisition and ensuring better stability for ship detection in complex backgrounds. The experimental results show that, compared with various current detection algorithms, the proposed algorithm achieves higher detection accuracy and can meet the needs of real-time detection. In future research, the network structure will be further optimized to improve detection accuracy and efficiency.
More than 50% of atmospheric water vapor exists in the lower atmosphere below 2 km. Vibrational Raman scattering lidar is an important remote sensing tool for atmospheric water vapor measurement. However, traditional vibrational Raman scattering lidars mainly adopt coaxial or non-coaxial parallel transceiver structures, whose detection blind zone and transition zone limit their effectiveness for near-surface atmospheric water vapor detection. We propose a novel lateral vibrational Raman scattering lidar technique based on a bistatic system structure, in which the lateral vibrational Raman scattering signals of N2 and H2O at different heights are detected by elevation-angle scanning of the lateral receiver system. This realizes fine, blind-zone-free detection of near-surface atmospheric water vapor from the ground to the height of interest.
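The bistatic geometry behind this scheme can be sketched numerically. Assuming a vertical laser beam and a lateral receiver at horizontal baseline D, a receiver elevation angle θ views scattering from height h = D·tan θ, so scanning θ sweeps the measurement from the ground upward with no blind zone (the function name below is illustrative):

```python
import math

def scatter_height(baseline_m, elevation_deg):
    """Height on a vertical laser beam viewed by a lateral receiver
    at horizontal distance `baseline_m`, pointed at `elevation_deg`."""
    return baseline_m * math.tan(math.radians(elevation_deg))

# With the 60 m baseline quoted later in the text, a 1400 m top height
# corresponds to a receiver elevation angle of roughly 87.5 degrees.
angle = math.degrees(math.atan(1400 / 60))
```

At θ = 0 the receiver views the beam at ground level, which is why this geometry has no blind zone, unlike a monostatic backward-scattering system.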
We study the lateral vibrational Raman scattering lidar technique for accurate measurement of atmospheric water vapor from the ground to the height of interest. First, a novel lateral scanning vibrational Raman scattering lidar technique is proposed and designed: two telescopes combined with specified narrow-band interference filters detect the lateral scattering signals of the vibrational Raman spectra of N2 and H2O, respectively. Then, the inversion algorithm for atmospheric water vapor using the lateral vibrational Raman scattering lidar is established. Because the vibrational Raman spectra of N2 and H2O differ greatly in wavelength, the slant-path atmospheric transmissivities of the two detection channels differ markedly; therefore, aerosol extinction coefficients inverted by the Raman method are adopted to correct the slant-path atmospheric transmissivity and improve the detection accuracy of the atmospheric water vapor mixing ratio. Finally, the experimental system is constructed, and preliminary experiments are conducted with the lateral scanning vibrational Raman scattering lidar. Two rotation schemes, continuous equidistant resolution and segmented equidistant resolution, are employed during the experimental observations.
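The transmissivity-corrected retrieval described above can be sketched as follows. This is the generic Raman-lidar mixing-ratio form (H2O/N2 signal ratio times a transmission correction and a calibration constant), not the authors' exact algorithm; all names and the sample numbers are illustrative.

```python
def water_vapor_mixing_ratio(p_h2o, p_n2, tau_n2, tau_h2o, calib_const):
    """Sketch of a Raman-lidar mixing-ratio retrieval: the H2O/N2
    signal ratio, corrected for the different slant-path atmospheric
    transmissivities of the two Raman wavelengths (the correction the
    text derives from Raman-inverted aerosol extinction), scaled by a
    calibration constant, e.g. fitted against a radiosonde profile."""
    transmission_correction = tau_n2 / tau_h2o
    return calib_const * (p_h2o / p_n2) * transmission_correction

# Illustrative numbers only: a 2% signal ratio with a 12.5%
# transmissivity imbalance between the two channels.
w = water_vapor_mixing_ratio(2.0, 100.0, 0.9, 0.8, 120.0)
```

Without the transmission correction, the large wavelength gap between the N2 and H2O Raman channels would bias the retrieved mixing ratio, which is the motivation the abstract gives for the real-time extinction-based correction.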
The detection principle of the lateral vibrational Raman scattering lidar is innovatively proposed. It overcomes the limitation of the traditional backward vibrational Raman scattering lidar with a monostatic transceiver structure, whose blind zone and transition zone prevent effective detection of near-surface atmospheric water vapor. Meanwhile, this technology can utilize a continuous-wave laser featuring light weight, portability, mobility, and low cost (Fig. 1). Data correction of atmospheric water vapor is realized by analyzing the atmospheric molecular scattering phase function and the difference in slant-path atmospheric transmissivity caused by the wavelength difference between the vibrational Raman spectra of N2 and H2O. The aerosol extinction coefficient obtained from the inversion of the lateral N2 vibrational Raman scattering signal is employed for real-time correction of the slant-path atmospheric transmissivity, which improves the accuracy of atmospheric water vapor mixing ratio detection (Figs. 2-4). Preliminary experimental observations with the lateral scanning vibrational Raman scattering lidar are performed using two rotation schemes, continuous equidistant resolution and segmented equidistant resolution. The results show that both schemes can realize atmospheric water vapor detection from the ground to the height of interest; in particular, the segmented equidistant resolution scheme provides finer detection of the water vapor distribution in the ground zone (Figs. 5-8).
We focus on the demand for detecting atmospheric water vapor from the ground to the height of interest using the lidar technique. Based on the theory of vibrational Raman scattering, an innovative lateral scanning Raman scattering lidar technology for detecting near-surface atmospheric water vapor is proposed. It combines the elevation-angle scanning function of the lateral receiver system to achieve blind-zone-free detection of water vapor in the lower atmosphere. Because the vibrational Raman spectra of N2 and H2O differ greatly in wavelength, the aerosol extinction coefficients obtained by inverting the lateral N2 vibrational Raman scattering signals are adopted to correct the slant-path atmospheric transmissivity in real time, which improves the accuracy of the atmospheric water vapor mixing ratio. If a high-power pulsed laser is applied, the system can be operated jointly with a backward vibrational Raman scattering lidar to measure atmospheric water vapor from the ground to the height of interest. The experimental results show that the lateral vibrational Raman scattering lidar can detect atmospheric water vapor mixing ratios up to 1400 m with a horizontal distance of 60 m between the laser transmitter system and the lateral telescope receiver system. Additionally, the segmented equidistant resolution scheme offers variable resolution at different heights, revealing more details of the water vapor distribution in the ground zone.
The GaoFen-5B (GF-5B) satellite, launched on September 7, 2021, achieves comprehensive atmosphere and land observation. Its visual and infrared multispectral sensor (VIMS) obtains imagery in 12 spectral bands from visible light to long-wave infrared. With a high signal-to-noise ratio and day-and-night observation capability, such imagery is widely applied to land degradation monitoring, crop growth analysis, and thermal pollution detection. GF-5B is equipped with three star sensors as the attitude measurement system for high-precision attitude determination and geometric positioning. Among them, star sensors 2 and 3 have better measurement accuracy and stability and are usually employed in the conventional attitude determination mode to calculate satellite attitude parameters. However, owing to factors such as sunlight exposure and an insufficient number of observed stars, sometimes only star sensors 1 and 2, or star sensors 1 and 3, work simultaneously to determine the attitude parameters; these are named unconventional attitude determination modes. Because of changes in the satellite's spatial thermal environment, the body structure and the installation structure of the attitude measurement payload undergo thermoelastic deformation, which causes attitude low frequency error related to the satellite orbit period. This seriously affects the consistency of attitude determination results between the conventional and unconventional modes and the stability of the geometric positioning accuracy of imagery without ground control points. Therefore, we propose a method for improving the geometric positioning accuracy of visual and infrared multispectral imagery based on spatiotemporal compensation of attitude low frequency error.
Based on the optical axis angles of the star sensors, the spatiotemporal characteristics of star sensor low frequency error are analyzed over 181 d for the GF-5B satellite. Sliding-window median filtering is applied to separate the low frequency error and the random error between the conventional and unconventional attitude determination modes. Then, because the local spatial relationships are complex, the attitude low frequency error between the two modes is segmented according to satellite latitude position. Within each position interval, the low frequency error is calibrated using a Fourier series model with the satellite latitude as the input parameter. To solve the drift of attitude low frequency error over time, we propose sequential temporal models of the low frequency error to ensure high-precision compensation. During compensation, the model for the unconventional attitude determination mode is selected among the sequential temporal models using the sampling time as the input parameter, and the compensation value of attitude low frequency error is then calculated using the Fourier series model with the latitude position as the input parameter.
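The latitude-dependent Fourier series calibration step can be sketched as a linear least-squares fit. The basis order, period, and function names below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def fit_fourier_series(lat_deg, error_arcsec, order=3, period_deg=360.0):
    """Fit error ~ a0 + sum_k [a_k cos(k w lat) + b_k sin(k w lat)]
    by linear least squares; returns coefficients and a predictor."""
    w = 2 * np.pi / period_deg
    cols = [np.ones_like(lat_deg)]
    for k in range(1, order + 1):
        cols.append(np.cos(k * w * lat_deg))
        cols.append(np.sin(k * w * lat_deg))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, error_arcsec, rcond=None)

    def predict(x):
        basis = [np.ones_like(x)]
        for k in range(1, order + 1):
            basis.append(np.cos(k * w * x))
            basis.append(np.sin(k * w * x))
        return np.column_stack(basis) @ coef

    return coef, predict
```

Because the model is linear in its coefficients, the fit is a single `lstsq` solve per latitude interval, which suits the per-interval calibration the text describes.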
Employing the experimental data of visual and infrared multispectral sensors, we analyze the calibration accuracy of attitude low frequency error, compensation accuracy of attitude low frequency error, and geometric positioning accuracy of visual and infrared images. For the calibration accuracy of attitude low frequency error, the model errors along yaw angle, roll angle, and pitch angle calibrated by the proposed method are 0.178
To improve the geometric positioning accuracy of the visual and infrared multispectral sensor of the GF-5B satellite, we put forward an attitude low frequency error compensation method based on spatiotemporal characteristics. The spatiotemporal characteristics of attitude low frequency error within 181 d are comprehensively analyzed, and a compensation strategy with time-sequential and multi-spatial models is proposed. Additionally, we execute sequential calibration at certain time intervals to eliminate the drift of low frequency error over time and build a compensation model with latitude position as the input parameter to compensate for the spatial differences of the low frequency error. The low frequency error characteristics of the conventional and unconventional attitude determination modes are unified by the proposed method, which improves the geometric positioning accuracy of the visual and infrared multispectral sensor of the GF-5B satellite across different imaging times and imaging areas.
The output of the visible on-board calibration (VOC) system of the medium resolution spectral imager (MERSI) on the FY-3B satellite degrades with time, which raises concerns about its reliability for absolute radiometric calibration. Users must distinguish between the uncertainties of the VOC system's radiometric output and those of the MERSI detectors, since this leads to a detailed understanding of the temporal evolution of the MERSI system and the VOC radiometric characteristics and ensures that the remote sensing data are fully calibrated and utilized in studies of observed targets. We aim to investigate the output variation of the MERSI VOC system and make special efforts to extract the variations of VOC radiometric performance. Annual degradation rates, defined as the percentage difference between the first and last measurements of each year, are employed to evaluate the VOC radiometric performance. The results are evaluated against trap detector monitoring to further validate the proposed processing approach.
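The annual-degradation-rate definition quoted above (the percentage difference between the first and last measurements of each year) can be written down directly. The date representation and function name are illustrative assumptions:

```python
def annual_degradation_rate(dates, values):
    """Annual degradation rate as defined in the text: the percentage
    difference between the first and last measurements of each year.
    `dates` are (year, day_of_year) tuples paired with `values`."""
    by_year = {}
    for (year, doy), v in sorted(zip(dates, values)):
        by_year.setdefault(year, []).append(v)
    return {
        year: 100.0 * (vals[0] - vals[-1]) / vals[0]
        for year, vals in by_year.items()
    }

# Illustrative data: a response dropping 10% in year one.
dates = [(2011, 10), (2011, 300), (2012, 5), (2012, 350)]
values = [1.0, 0.9, 0.88, 0.8]
rates = annual_degradation_rate(dates, values)
```

Expressing the rate per year, rather than over the whole mission, is what lets the text report an initially high rate that later stabilizes.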
Based on the characteristics of the satellite orbit and the structure of the MERSI VOC, we introduce a novel methodology to assess changes in the VOC system's radiometric output, with a particular focus on the relationship between sunlight calibration opportunities and the solar zenith and azimuth angles. We then screen the sunlight-based calibration data out of the multi-source (interior lamps, sunlight, space-view background) calibration data. The analysis provides perspectives on the comparative radiometric performance of MERSI; most band responses follow a downward trend. Subsequently, a relative response characterization step is performed using an exponential function obtained via least-squares fitting of the VOC data. High-quality MODIS data are leveraged to develop a top-of-atmosphere (TOA) bidirectional reflectance distribution function (BRDF) model and thus enhance the study's precision. The time series of TOA reflectance obtained by BRDF model fitting is compared with that measured by MODIS, and the time series of BRDF modeling residuals is analyzed. This model is then utilized in cross-calibration processing with nearly 10 years of on-orbit MERSI data. The cross-calibration includes spectral matching between the two sensors, viewing geometry correction, and spectral interpolation. Additionally, the TOA reflectance is converted to calibration coefficients using a calibration equation involving the zenith angle, azimuth angle, digital counts, and earth-sun distance. This comprehensively evaluates MERSI's absolute radiometric performance, and the relative and absolute radiometric characteristics of MERSI are standardized to the initial regression point, with the normalized difference treated as an indicator of the decay in VOC radiometric performance.
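The exponential least-squares characterization step can be sketched as a log-linear fit, assuming strictly positive responses; this is a generic stand-in for the fitting described above, not the authors' exact procedure, and the model parameters are illustrative:

```python
import numpy as np

def fit_exponential_decay(t_days, response):
    """Fit response ~ A * exp(-b * t) by least squares on log(response):
    the exponential model becomes linear in log space, so an ordinary
    first-degree polyfit recovers the decay rate and amplitude."""
    slope, log_a = np.polyfit(t_days, np.log(response), 1)
    return np.exp(log_a), -slope  # (A, decay rate per day)

# Synthetic noiseless series with a 2e-4/day decay for demonstration.
t = np.linspace(0.0, 3000.0, 50)
resp = 1.0 * np.exp(-2e-4 * t)
A, rate = fit_exponential_decay(t, resp)
```

A log-linear fit weights relative rather than absolute deviations, which is usually the desired behavior when the quantity of interest is a fractional degradation rate.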
Recent analyses of the MERSI sensor response over the Libya-4 pseudo-invariant site and cross-calibration with MODIS show that FY-3B MERSI has not deteriorated as much as the sunlight-based calibration trend suggests. Comparing these lifetime trends with the relative and absolute radiometric characteristics of MERSI yields an estimate of the difference between the two calibrations, and we conclude that degradation of the VOC radiometric performance can explain the observed differences. The results illustrate that the degradation rates of VOC radiometric performance are wavelength-dependent, with an initially higher rate that gradually decreases over the years and eventually stabilizes. Notably, in the early mission stages, the shortwave outputs (below 500 nm) exhibit substantial degradation, reaching up to 49.51%, whereas the decay rates at longer wavelengths (800-1000 nm) are relatively modest, remaining within 26%. In the later stages of the mission, the decay rates for most wavelengths are approximately 0.64%, except for 412 nm, which experiences a higher rate of approximately 1.91%. To further validate the employed processing approach, we compare the decay in VOC radiometric performance calculated by us with that monitored by the trap detector. Since we cannot determine how the amount of data passing through the filter changes on orbit, the VOC radiometric performance is normalized by the first measurement value. The results indicate that the maximum percentage differences observed throughout the instrument's lifetime remain below 15% at 470 nm and 14% at 65 nm.
A general procedure is developed and implemented to provide users with the ability to characterize the decay rate of the VOC system's radiometric output. The results demonstrate that the maximum annual decay rates (ADRs) of the short-wave outputs (<500 nm) range from 46% to 50%, while the longer wavelengths (800-1000 nm) reveal relatively smaller changes of approximately 26%. The current implementation leads to a further understanding of changes in the VOC system output, and the adopted methodology serves as a valuable reference for analogous efforts at on-orbit absolute radiometric calibration of other sensors.
The infrared Fourier spectrometer is based on interferometric spectroscopy and features high spectral resolution and high sensitivity. Because of the ultra-fine spectral resolution of infrared hyperspectral atmospheric sounders, minor errors in spectral calibration can cause radiation measurement errors, so high-precision spectral calibration is an important prerequisite for quantitative inversion and the application of infrared remote sensing. The spectral calibration accuracy is affected by the finite field of view and the off-axis effect. The traditional method obtains the spectral calibration coefficient by fitting multiple spectral lines. However, to cope with ultra-high spectral resolution and wide observation bands, most spaceborne infrared hyperspectral instruments employ forward modeling to build instrument spectral response models and remove various spectral effects.
Based on the optical field-of-view characteristics, we conduct spectral simulations of the finite field of view and the off-axis effect, and study spectral correction methods for the plane-array Fourier spectrometer. First, the influence of the instrument line shape (ILS) function is analyzed to determine analysis methods for the different influencing factors (finite optical path difference, finite field of view, and off-axis effect). Next, taking a planar circular detector as an example, the ILS function is constructed by combining the optical characteristics of the instrument itself. Then, the spectral calibration error and spectral sensitivity caused by the off-axis effect are simulated using gas absorption spectroscopy. Finally, test data of the optical field of view are obtained via slit scanning, and spectral correction and calibration accuracy verification are carried out based on the pre-launch spectral calibration data of FY-3F/HIRAS-Ⅱ.
The experimental results indicate that the finite field of view and the off-axis effect broaden the spectrum and shift it toward lower wavenumbers. The spectral calibration accuracy depends quadratically on the off-axis angle θrc and the pixel field-of-view angle θR. The off-axis angle is the more sensitive of the two, and its contribution to the spectral calibration error is much greater than that of the pixel field-of-view angle. When θR = 60′, the error caused by a measurement accuracy of 2′ is approximately 1.3×10⁻⁶; when θrc = 101.82′ (-72′, 72′), the error caused by a measurement accuracy of 2′ in one direction is about 12×10⁻⁶. After spectral calibration and correction, the spectral calibration error of the worst central pixel decreases from -24.69×10⁻⁶ to 0.54×10⁻⁶, and that of the worst edge pixel from -513.38×10⁻⁶ to -0.15×10⁻⁶. All pixels in the three bands meet the indicator requirement of less than 7×10⁻⁶.
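The quoted sensitivities are consistent with the standard interferometry approximations that a circular field of view of half-angle α shifts the band centroid by about -α²/4 and that an off-axis angle θ shifts it by about -θ²/2. Under those assumptions, and reading θR = 60′ as a full angle and θrc as the resultant of the 72′ components (both readings are interpretive assumptions), a quick check reproduces the ≈1.3×10⁻⁶ and ≈12×10⁻⁶ figures:

```python
import math

ARCMIN = math.pi / 180 / 60  # arcminutes to radians

# Finite circular field of view: the centroid shift is roughly
# -alpha^2/4 for half-angle alpha, so a 2' full-angle measurement
# accuracy (1' in alpha) perturbs the relative shift by
# alpha * d_alpha / 2.
alpha = 30 * ARCMIN            # half of theta_R = 60'
d_alpha = 1 * ARCMIN           # half of the 2' measurement accuracy
fov_sensitivity = alpha * d_alpha / 2           # ~1.3e-6

# Off-axis pixel: the shift is roughly -theta^2/2.  For theta_rc
# built from (72', 72') components, a 2' error along one axis changes
# theta_rc by (72/101.82)*2', and the shift by theta_rc * d_theta.
theta_x = theta_y = 72 * ARCMIN
theta_rc = math.hypot(theta_x, theta_y)         # ~101.82'
d_theta = (theta_x / theta_rc) * 2 * ARCMIN
offaxis_sensitivity = theta_rc * d_theta        # ~12e-6
```

The quadratic dependence on angle is also why the off-axis term dominates: at 101.82′ the shift itself is an order of magnitude larger than the field-of-view term, so the same 2′ uncertainty propagates into a roughly ten times larger calibration error.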
Based on the characteristics of the infrared hyperspectral atmospheric sounder for the FY-3F satellite, the ILS function and a comprehensive spectral effect matrix are constructed, and a sensitivity analysis of the spectral calibration accuracy is conducted. Adopting HITRAN simulations as the standard spectral lines, the spectral calibration accuracy of long-wave NH3 absorption lines under different off-axis angles and pixel field-of-view angles is studied; the calibration accuracy is a quadratic function of the angles. The sensitivity to the off-axis angle is much higher than that to the pixel field-of-view angle: the spectral calibration accuracy of the central pixel is -18.84×10⁻⁶, and that of the outermost pixel is -451×10⁻⁶. Meanwhile, the spectral calibration errors caused by position error and pixel size error under the existing optical field-of-view testing conditions are 1.3×10⁻⁶ and 12×10⁻⁶, respectively. We have studied pre-launch spectral calibration and correction methods based on the instrument's optical characteristics and completed the pre-launch spectral performance evaluation of FY-3F/HIRAS-Ⅱ. After spectral correction, the maximum spectral calibration error of each pixel in the three bands is 2.23×10⁻⁶, meeting the spectral calibration index requirement of 7×10⁻⁶. Additionally, our study provides guidance for the future design and testing of optical field parameters and for improving spectral calibration accuracy.