• Opto-Electronic Advances
  • Vol. 3, Issue 11, 200048-1 (2020)

Deep-learning powered whispering gallery mode sensor based on multiplexed imaging at fixed frequency

Anton V. Saetchnikov1,2,*, Elina A. Tcherniavskaia3, Vladimir A. Saetchnikov2, and Andreas Ostendorf1
Author Affiliations
  • 1Applied Laser Technologies, Ruhr University Bochum, Bochum 44801, Germany
  • 2Radio Physics Department, Belarusian State University, Minsk 220064, Belarus
  • 3Physics Department, Belarusian State University, Minsk 220030, Belarus
    DOI: 10.29026/oea.2020.200048

    Abstract

    During the last decades, whispering gallery mode based sensors have become a prominent solution for label-free sensing of various physical and chemical parameters. At the same time, widespread adoption of the approach is hindered by the restricted applicability of the known configurations to quantifying ambient variations outside laboratory conditions and by their low affordability, where the necessity of spectrally resolved data collection is among the main limiting factors. In this paper we demonstrate the first realization of an affordable whispering gallery mode sensor powered by deep learning and multi-resonator imaging at a fixed frequency. It has been shown that the approach enables refractive index unit (RIU) prediction with an absolute error at the 3×10⁻⁶ level over a dynamic range of RIU variations from 0 to 2×10⁻³, with a temporal resolution of several milliseconds and an instrument-driven detection limit of 3×10⁻⁵. The high sensing accuracy together with instrumental affordability and production simplicity places the reported detector among the most cost-effective realizations of the whispering gallery mode approach. The proposed solution is expected to have a great impact on shifting the whole sensing paradigm away from model-based and towards flexible self-learning solutions.

    Introduction

    Optical resonance in dielectric circular microcavities, referred to as the effect of whispering gallery modes (WGM), has drawn great attention as a highly sensitive and label-free instrument for the detection of biochemical components1-4. Light confined and guided along the microcavity's periphery forms a WGM when the resonance condition is met, i.e., when the returning light wave interferes constructively with itself; such modes are characterized by high quality (Q)-factors5, 6. Various microcavity geometries that may support WGMs have been reported so far (e.g., spheres, disks, toroids, capillaries, etc.) together with different methods for their fabrication, among which the lithography technique prevails7-10. The efficient light coupling into the cavity is realized via the evanescent field; here the prism-based method is inferior to the tapered-fiber one (the most widely employed) in efficiency, but excels in robustness and affordability11. The sensing mechanism of a WGM instrument is based on the response of the mode field to variations in the ambient environment via the evanescent wave. Depending on the nature of the external stimuli, one can distinguish between monitoring of the resonance frequency shift (used for detection of refractive index variations and analyte adsorption), and of the linewidth broadening as well as mode splitting (for single nanoparticle or biomolecule detection)12. Among the common methods for limit of detection (LOD) enhancement are local field amplification initiated by single metallic nanoparticles13, 14, and doping of the microresonator with gain material to enable lasing15, 16. Recently the mode-splitting enhancement in lasing resonators at so-called exceptional points has been demonstrated17.
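
    For reference, the standard textbook relations behind this sensing mechanism (not quoted from this paper) can be written in LaTeX as follows: the approximate resonance condition for a sphere of radius R and effective index n_eff, and the first-order resonance shift exploited for bulk refractive index sensing.

        2\pi R \, n_{\mathrm{eff}} \approx m \, \lambda_m , \qquad m \in \mathbb{N} ,
        \frac{\Delta\lambda_m}{\lambda_m} \approx \frac{\Delta n_{\mathrm{eff}}}{n_{\mathrm{eff}}} + \frac{\Delta R}{R} .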

    In order to measure a WGM signal, a narrow-linewidth frequency-sweeping source or high-resolution spectroscopic units together with spectral data processing methods are commonly utilized. In addition, the common instrument configuration is restricted to single-microresonator interrogation and is based on collecting the transmitted (for unmodified cavities) or scattered (for cavities doped with a fluorescent label) intensity with a photodiode to trace the resonance position in the acquired spectrum. This limits the sensor performance in terms of temporal and/or spectral resolution and restricts the utilization of the WGM instrument outside the laboratory. In contrast, intensity-based signal detection at a fixed laser frequency greatly simplifies the detector and increases its integrability and portability18. For active (fluorescently doped) microcavities, a fixed-wavelength pulsed laser source together with spectrally resolving instruments is commonly used to observe WGM variations. At the same time, the necessity of microcavity doping limits the range of host materials and complicates the fabrication process19. The use of alternative WGM coupling/detection schemes such as self-heterodyned microlasing15, mode locking20, or ring-up spectroscopy21 implies more complicated interrogation and signal processing procedures.

    The demand for microcavity-based sensing solutions applicable to real-life tasks outside the laboratory is continuously increasing22-24. An option for practical applications might be an instrument configuration based on imaging of the WGM signal with a CCD camera in the optical prism or parallel-plate excitation scheme, which inherently supports scalability without changes in the instrument arrangement25-30. The signal collection for this configuration was originally based on detecting the variations of the radiated WGM energy while sweeping the wavelength of the laser source26. Later on, signal collection in the form of the fluorescence excited in dyes attached to the microsphere surface during laser wavelength sweeping was reported27. Recently, a reusable biochemical sensor in the form of an array of randomly allocated unmodified glass microspheres with imaging of the radiated WGM signal in the prism excitation scheme has been demonstrated29, 30. Relatively simple sensor fabrication, reusability, and the possibility of multicavity signal collection make this platform especially attractive. The multidimensional nature of the captured signal complicates interpretation of the external variations by analytical descriptions, which can be addressed by self-learning algorithms. Only a few examples of the application of machine learning approaches to optical microresonators have been shown so far. Among them are the classification of different biochemical solutions based on WGM spectral changes31 and a recent application of the multiple resonant modes of a single microresonator for two-parameter estimation32. Both solutions imply preliminary WGM spectrum collection followed by extraction of spectral features that act as the input variables for the machine learning algorithms. Other recent examples report on the application of machine learning methods to optical sensing for the attenuated total reflectance technique used for humidity detection33 and for localized surface plasmon resonance used for refractive index prediction34.

    In this paper, we report a novel self-learning whispering gallery mode sensor that utilizes a multi-microcavity imaging scheme enabled by a cost-effective laser operating at a fixed frequency. The prospects of deep-learning based quantification of external variations are demonstrated on the example of refractive index detection.

    Materials and methods

    Samples

    A microresonator-based sensor has been fabricated in the form of numerous glass microspheres (Cospheric LLC) with a mean diameter of 100 μm. The microcavities have first been ultrasonically cleaned to remove possible surface contaminations and then randomly allocated on a glass cover slip by free fall. Immobilization of the microspheres is achieved with a thin adhesive layer with a water-matched refractive index (MyPolymer MY-133MC) deposited in advance onto the cover slip surface29. The individual microresonators are separated from each other by 1.06 mm on average, with a minimal nearest-neighbor distance of 156 μm. The homogeneity of the adhesive layer has been ensured by spin coating (SPS-Europe Spin 150) of a 10 μl drop, whose parameters (rotation at 6000 min⁻¹ for 60 s) have been experimentally optimized to meet a trade-off between the loaded quality factors of the microresonators, their sensitivity, and the layer durability. The average thickness of the fixation layer is measured to be 540 nm. Finally, the cover slip with microcavities is integrated into a fluidic cell with a measurement chamber for the analyte.

    Instrument

    The WGM excitation is performed via an optical prism, and the scattered light is collected in the WGM imaging scheme by a conventional monochrome CCD camera that allows signal acquisition from several hundred resonators simultaneously (Fig. 1). An optical prism with antireflection coatings on the side facets has been selected to reduce the parasitic background signal caused by back-reflections inside the prism.

    Figure 1. Overview of the instrument configuration: 1-laser diode; 2-collimation lens; 3-camera; 4-beam dump; 5-right angle optical prism; 6-adhesive thin layer; 7-microresonator.

    A vertical-cavity surface-emitting laser (VCSEL) operating at 850 nm (Thorlabs L850VG1) and collimated with an aspheric lens has been selected for WGM signal observation at a fixed wavelength. For WGM applications, a VCSEL offers the benefits of spatially and spectrally single-mode lasing, a Gaussian-like beam profile, high energy efficiency with a threshold current of several mA, and affordability. The laser diode is stabilized by current (Thorlabs LDC200CV) and temperature (Thorlabs TED200C) controllers. The parameters of the laser diode controllers have been set to 3.9 mA and 24 °C for all experiments discussed in this paper. The long-term stability of the lasing properties over more than three hours is characterized by a relative power variability below 10⁻⁴, and the wavelength shift measured with a wavemeter (EXFO WA-1500) is below 0.4 pm with no hops observed. A tunable diode laser (New Focus, 680 nm) has been utilized for WGM spectra collection via frequency sweeping at a speed of 0.1 nm/s for comparison purposes.

    Signal features

    The temporal consistency of the WGM spectra under constant ambient conditions, the loaded Q-factors of the microspheres of 10³-10⁵ (ref. 29), and the stability of the laser illumination enable intensity-based detection. The captured signal represents a set of intensities that are modulated by the spectral properties of each microcavity with respect to the spectral position and the width of the laser line (Fig. 2(a)). The signal for a single microsphere is defined as the sum of intensities over the cavity-related camera pixels, which are determined according to the previously established localization procedure35. The WGM spectra of the microcavities have a unique form caused by variations in shape, surface quality, and coupling efficiency (Fig. 2(b)). The different positions with respect to the illumination profile and the possibility of multimode excitation give the captured signal numerous degrees of freedom. This allows multi-parameter analysis of small ambient disturbances.
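
    As a minimal sketch of this signal definition (assuming the per-cavity pixel masks from the localization step are already available; all names below are hypothetical), the per-microsphere intensity can be computed as a sum over the mask of each cavity:

    import numpy as np

    def microsphere_signals(frame, pixel_masks):
        """Sum the camera intensities over the pixels assigned to each microcavity.

        frame       : 2D array, a single monochrome camera image.
        pixel_masks : list of boolean arrays (same shape as frame), one per
                      microsphere, obtained from the prior localization step.
        Returns a 1D array with one intensity value per microsphere.
        """
        return np.array([frame[mask].sum() for mask in pixel_masks])

    # Stacking the per-frame results gives the (time x resonator) matrix used below:
    # intensities = np.vstack([microsphere_signals(f, pixel_masks) for f in frames])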

    Figure 2. Overview of the WGM signal.

    Results and discussion

    Several experiments have been performed in order to analyze the performance of the fixed frequency interrogation imaging scheme for sensing with passive microcavity-based sensors. The first study is intended to compare this approach with the conventional resonance frequency tracking method. The second study focuses on processing the signals measured under constant frequency illumination with deep-learning methods for quantification of the ambient variations.

    Comparison study

    A set of spectral shift variations represented in Fig. 3(a) shows the sensor response measured in the frequency sweeping scheme to an external refractive index change of 2.3×10⁻³. A step-like change is observed, where the difference in the absolute values is governed by the sensing properties of the individual microresonators. The same external impact measured at the fixed frequency is demonstrated in Fig. 3(b). For the represented set of more than 140 microcavities, a step-like change of the radiated intensities similar to the spectral shift variations is observed for the majority of the resonators. In contrast to the frequency sweeping scheme, where a spectral shift is observed for each microresonator, the data collected in the constant frequency illumination scheme represent a complex non-linear modulation of the external perturbations by the unique spectral features of each microresonator. Consequently, the resonators whose resonance frequencies are close to the interrogation frequency and whose resonance peaks are more prominent show major variations, whereas the others show minor contrast or almost no changes.

    Figure 3. Comparison of the experimental data collected in the laser frequency sweeping and the fixed frequency schemes for the same sensor sample under changing ambient refractive index.

    Intensity sets for different wavelengths with a step of 1 pm over the whole sweeping range (1.5 nm) have been extracted from the tunable laser data and compared with the generalized spectral shift in order to study the impact of the chosen illumination wavelength and the number of resonators on the agreement between the variations measured in the two sensing schemes. The generalized dynamics for the sensor as a whole for the sweeping and constant frequency schemes have been calculated via principal component analysis, where the time points act as the objects. Statistics on the correlation of the temporal variations of the scaled values of the first principal component for different illumination wavelengths as a function of the number of microcavities are represented in Fig. 3(c). The agreement between the datasets improves as the number of resonators increases: starting from 100 resonators, the correlation exceeds 0.98 and is only slightly affected by the wavelength selection.
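
    A minimal sketch of this generalization step (assuming the (time x resonator) intensity matrices introduced above; the array names are hypothetical) could look as follows; note that the sign of a principal component is arbitrary, so the agreement is compared by absolute value:

    import numpy as np
    from sklearn.decomposition import PCA

    def generalized_dynamics(intensities):
        """First principal component of the (time x resonator) matrix,
        min-max scaled to [0, 1]; the time points act as the objects."""
        pc1 = PCA(n_components=1).fit_transform(intensities).ravel()
        return (pc1 - pc1.min()) / (pc1.max() - pc1.min())

    # Agreement between the two interrogation schemes:
    # r = np.corrcoef(generalized_dynamics(fixed_frequency_intensities),
    #                 generalized_dynamics(sweeping_spectral_shifts))[0, 1]
    # agreement = abs(r)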

    The temporal variations of the scaled values of the first principal component for the sweeping (Fig. 3(a), dots) and constant (Fig. 3(b), solid line) frequency results are represented in Fig. 3(d). The agreement of the extracted dynamics is characterized by a mean square error (MSE) below 2×10⁻³, while the constant frequency illumination scheme improves the temporal resolution by several orders of magnitude, being limited only by the camera frame rate (several tens of ms). This enables the analysis of the intermediate sensing states during changes in the ambient medium.

    Thus, the fixed frequency interrogation approach is a promising affordable alternative to the frequency sweeping technique, offering enhanced temporal resolution and being unaffected by the choice of the laser frequency when more than a hundred microresonators are considered simultaneously. Principal component analysis of the data has demonstrated excellent correlation of the scaled temporal features; however, the description of the measured response in terms of the external impact still requires knowledge of the spectral features of the particular microcavities. The complex response of the set of microresonators cannot be generalized for the sensor as a whole using analytical approaches. Thus, the regression problem of estimating the ambient parameters can be addressed with self-learning solutions.

    Deep-learning analysis

    A set of sensor responses representing different external refractive indexes under illumination at a constant frequency has been captured for interpretation of the multiresonator imaging signal. The ambient variations are provided by different ethanol concentrations in deionized water (from 0% to 4.2%), which correspond to refractive index changes with a maximum value of 2.3×10⁻³ and a minimum step of 3×10⁻⁴. Analysis of the sensor performance on water/ethanol mixtures is a common technique, and the corresponding RIU values can be found in the literature. The signal collected for the sample with 112 microresonators, consisting of 3888 intensity values grouped into 9 datasets representing different refractive index states, is shown in Fig. 4. These results exhibit step-like variations of the intensities radiated by the individual microcavities when the environmental conditions change. The data also clearly show the non-linear character of the captured intensity variations: some resonators show a continuous increase or decrease of the radiated intensity, others show non-monotonic dynamics, and the rest remain "silent" to the external variations.

    Figure 4. Sensor response to the refractive index changes.

    Despite the overall non-linear character of the captured intensity variations, the difference between two slightly different RIU states may be described by a linear function for each resonator independently. Taking into account the complex multidimensional nature of the collected signal, the LOD of the multicavity sensor is described by the best LOD value derived for a single element. In order to estimate the LOD, we have taken the first ambient state (water solution) as a blank sample, whose measurement has been repeated 360 times, and the second state (0.5% ethanol), repeated 311 times (Fig. 4). First, the sensitivity of the radiated intensity to the RIU change (ri) and the standard deviation of the radiated intensities for 0.5% ethanol (si) have been calculated for each resonator (i = 1...112). Then a set of LODs has been calculated as LODi = t·si/ri, where t is the 95th percentile of the Student's t-distribution with 310 degrees of freedom36, 37. Hereby the LOD of the multiresonator sensor interrogated at the fixed frequency has been determined to be on the level of 4×10⁻⁵, which is comparable to the previously reported results for interrogation with a sweeping source29.
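
    A minimal sketch of this estimate (assuming the repeated intensity measurements for the blank and the 0.5% ethanol state are stacked into (repeats x resonators) arrays; the array names and the refractive index step in the usage comment are illustrative assumptions):

    import numpy as np
    from scipy.stats import t as student_t

    def per_resonator_lod(blank, sample, delta_n):
        """LOD_i = t * s_i / r_i for each resonator.

        blank, sample : (n_repeats, n_resonators) intensity arrays for the
                        water blank and the 0.5% ethanol state.
        delta_n       : refractive index difference between the two states.
        """
        r = (sample.mean(axis=0) - blank.mean(axis=0)) / delta_n  # sensitivity, intensity per RIU
        s = sample.std(axis=0, ddof=1)                            # std of the 0.5% ethanol intensities
        t_crit = student_t.ppf(0.95, df=sample.shape[0] - 1)      # 95th percentile, 310 dof for 311 repeats
        return t_crit * s / r

    # lods = per_resonator_lod(blank_intensities, ethanol_intensities, 3e-4)
    # sensor_lod = np.nanmin(np.abs(lods))  # multicavity LOD: best single-element value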

    We have tested several methods and algorithms to solve the regression problem for the measured data: deep neural network (dNN), linear regression (LR), random forest (RF), general regression neural network (GRNN), gradient boosting (GB), and support vector regression (SVR). The algorithms have been realized in Python using the tensorflow, keras, and sklearn libraries: tensorflow and keras have been used for the dNN, whereas sklearn was employed for the other methods. The values have been preprocessed so that resonators showing no resonance behaviour (a constant zero value over time) are excluded from the analysis, and the temporal variations of the remaining resonators are scaled to the [0, 1] range. The structure of the dNN is specified by 32 neurons in the input layer, 3 hidden layers with 64 neurons, the tanh activation function, and Adam optimization. A detailed study of the impact of the dNN parameters is given below. The coefficients of the linear regression are obtained via the closed-form matrix-inversion solution. The RF method is based on 100 decision trees with a random state parameter of 42 controlling the randomness of the sample bootstrapping. The number of decision trees has been optimized in the range [50, 150] with a step of 25 to achieve the maximum accuracy in the testing phase without overfitting. In order to determine the depth of the trees, the nodes were expanded until all leaves contained fewer than two samples (the minimum number of samples required to split an internal node). The standard deviation parameter of the GRNN method has been set to 0.1. This value has been taken as nominal and has not been optimized, since this selection commonly ensures the best algorithm efficiency for data scaled to the [0, 1] range. Since the GB algorithm is in general robust to overfitting, the optimal number of boosting stages has been determined by increasing it from 100 (the default) with a step of 50 until the testing accuracy evaluated with the MSE metric (Friedman MSE) started to decrease. Optimization of the other parameters of the GB algorithm has been found to have a minor impact on the training process and testing results, so they have been selected as follows: 4 maximum depth nodes in the tree, 5 samples to split an internal node, a learning rate of 0.01, a minimum samples split of 2, and a maximum depth of the individual regression estimators of 3. The SVR method is based on a radial basis function kernel with a tolerance of 0.001 (used as the stopping criterion).
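
    To make the comparison concrete, a minimal sketch of how these models could be instantiated with the quoted hyperparameters is given below (Keras for the dNN, sklearn for the baselines; the GRNN is not part of sklearn and is omitted here, and where the quoted GB parameters are ambiguous the library defaults are kept). This is an illustration under those assumptions, not the authors' exact code.

    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
    from sklearn.svm import SVR
    from tensorflow import keras
    from tensorflow.keras import layers

    def build_dnn(n_features, n_input=32, n_hidden=64, n_layers=3, activation="tanh"):
        """dNN for RIU regression: a 32-neuron input layer, 3 hidden layers of
        64 neurons, tanh activation, Adam optimizer, single linear output."""
        model = keras.Sequential([keras.Input(shape=(n_features,)),
                                  layers.Dense(n_input, activation=activation)])
        for _ in range(n_layers):
            model.add(layers.Dense(n_hidden, activation=activation))
        model.add(layers.Dense(1))  # predicted refractive index change
        model.compile(optimizer="adam", loss="mse")
        return model

    baselines = {
        "LR":  LinearRegression(),                      # closed-form least squares
        "RF":  RandomForestRegressor(n_estimators=100, random_state=42,
                                     min_samples_split=2),
        "GB":  GradientBoostingRegressor(learning_rate=0.01),
        "SVR": SVR(kernel="rbf", tol=1e-3),
    }
    # model = build_dnn(n_features=112)  # 112 resonator intensities as input features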

    The results for the absolute error between the measured and the predicted data for the different processing methods are represented in Fig. 5(a). GRNN, RF, and dNN predict the sensing data better than the other methods, with a mean error value not exceeding 5×10⁻⁷ RIU. While the dNN algorithm produces a smaller spread of the error values than the other methods, the GRNN and RF methods show extremely low error values. At the same time, these methods may generate outliers comparable to the measurement steps in the experimental dataset (~10⁻⁴), whereas for the dNN the outliers do not exceed 7×10⁻⁶. Moreover, although the GRNN method produces the best mean error value and generalizes well from the experimental features to the outputs, its performance drops substantially when it is applied to features it has not been trained on. Finally, the dNN method is characterized by a wide range of parameters that can be optimized for the best fit of the experimental data and has therefore been selected for further study.

    Figure 5. (a) Distribution of the absolute error values between the measured refractive indexes and the values predicted with different processing approaches: deep neural network (dNN), linear regression (LR), random forest (RF), general regression neural network (GRNN), gradient boosting (GB), and support vector regression (SVR). (b) Statistics on the performance of the refractive index prediction with the dNN approach with different combinations of weights optimization methods (Adam, RMSprop, Nadam, Adagrad, and Adadelta) and activation functions (tanh, sigmoid, relu, selu, linear, and softplus).

    We performed three different tests on an Nvidia Tesla K80 GPU in order to analyze the performance of the dNN for prediction of the RIU values from the known microcavity intensities measured at a fixed laser frequency. The training process runs for a maximum of 1500 epochs and terminates when the relative change of the mean squared error, averaged over the last 30 epochs for both the training and validation datasets, stays below 5×10⁻⁴ during the last 10 training iterations.
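
    One possible reading of this termination rule, written as a custom Keras callback, is sketched below; the window, tolerance, and patience values follow the text, while the implementation details (how the windowed averages are compared) are an assumption.

    import numpy as np
    from tensorflow import keras

    class RelativeMSEStopping(keras.callbacks.Callback):
        """Stop when the relative change of the windowed mean of the MSE, for
        both training and validation losses, stays below `tol` for `patience`
        consecutive epochs (the 1500-epoch cap is passed to model.fit)."""
        def __init__(self, window=30, tol=5e-4, patience=10):
            super().__init__()
            self.window, self.tol, self.patience = window, tol, patience
            self.train_hist, self.val_hist, self.calm_epochs = [], [], 0

        def on_epoch_end(self, epoch, logs=None):
            self.train_hist.append(logs["loss"])
            self.val_hist.append(logs["val_loss"])
            if len(self.train_hist) < 2 * self.window:
                return
            converged = True
            for hist in (self.train_hist, self.val_hist):
                prev = np.mean(hist[-2 * self.window:-self.window])
                curr = np.mean(hist[-self.window:])
                converged &= abs(curr - prev) / prev < self.tol
            self.calm_epochs = self.calm_epochs + 1 if converged else 0
            if self.calm_epochs >= self.patience:
                self.model.stop_training = True

    # model.fit(X_train, y_train, validation_data=(X_val, y_val),
    #           epochs=1500, callbacks=[RelativeMSEStopping()], verbose=0)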

    In the first test, the impact of the optimization method (Adam, RMSprop, Nadam, Adagrad, and Adadelta) and of the activation function (tanh, sigmoid, relu, selu, linear, and softplus) has been studied. In contrast to the standard gradient descent methods used in the back-propagation algorithm, the tested optimization methods are supplemented by momentum and adaptive learning rates, which commonly provide faster training and higher accuracy, especially for a deep neural network. The wide range of tested activation functions is dictated by the unique nature of the collected data. The dNN architecture is kept the same as discussed previously (Fig. 5(a)): an input layer with 32 neurons and 3 hidden layers with 64 neurons. Figure 5(b) represents the results for all combinations of the activation functions and optimization methods, where five training repetitions have been performed for each combination. For each of the training repetitions, the experimental dataset has been split into training (70%), validation (15%), and test (15%) parts, where the values have been randomly selected and the ratio between the parts has been kept constant for all output values. The results show that the linear activation function delivers the largest error value regardless of the optimization method. The combinations tanh + Adam, relu + Adam, and sigmoid + Nadam show a median error value below 3×10⁻⁷ RIU for the predicted data. Among them, only the tanh + Adam combination delivers both the lowest median error value and the smallest number of outliers, whose values do not exceed 3×10⁻⁶ RIU.

    The second test is intended to study the influence of the dNN architecture on the prediction error, where the number of neurons in the input layer (n = 16, 32, 48, 56, 64, 80, 96, 112, 128, 256, 512, 1024) and the number of hidden layers (3, 4, 5, 6) with 2n neurons have been varied. The results of this study, in which the training of each particular dNN configuration has also been repeated five times and the splitting proportion has been kept as discussed previously (70%:15%:15%), are shown in Fig. 6(a). The dNN architecture with 16 neurons in the input layer shows a poor match of the predicted results to the experimental data, with an error of ≈1.5×10⁻⁵ RIU. The other results show only an implicit correlation between the number of layers and neurons and the prediction error. This is mostly caused by the training termination criterion as well as the limited number of training repetitions and thus the variability among the training, validation, and test datasets. Nevertheless, the dNN architecture with 3 hidden layers dominates among the best results and reaches a median error value below 1×10⁻⁷ RIU. On the other hand, the dNN with 6 hidden layers has the highest median error values. The best result has been obtained for the dNN with 3 hidden layers and 48 neurons in the input layer, with a median error of 3×10⁻⁸ RIU and outliers that reach at most 1×10⁻⁶ RIU.

    Figure 6. Statistics on the refractive index prediction accuracy represented as absolute error values for different dNN configurations with varying number of neurons (N) in the input layer (n = 16, 32, 48, 56, 64, 80, 96, 112, 128, 256, 512, 1024) and number of hidden layers (L) (3, 4, 5, 6) with 2n neurons.

    Due to the limited set of measured RIU states used for dNN training, which is always the case in an experiment, the network may effectively face a classification problem instead of a regression one. For that reason, the complete set of experimental data representing the sensor response to a specific RIU value (a sensing phase in Fig. 4) has been excluded from the training process. In contrast to the previously performed tests, where the whole set of measured RIU responses has been used for dNN training, this test checks the prediction correctness for an unknown output RIU state and thus shows the accuracy of the regression problem solution.

    The results of these calculations, represented in Fig. 6(b), show a clear correlation of the error values with the number of neurons in the layers. The statistics summarize the results for the RIU prediction accuracy, where each output RIU class (except for the extreme values of the first and the last sensing states) from the experimental dataset has initially been completely excluded from the hyperparameter tuning procedure and then has been analyzed with the dNN. For the data used in the dNN optimization procedure, the proportions among the test, training, and validation parts have been kept the same as mentioned previously. It has been found that the number of hidden layers has an insignificant impact on the prediction error. For neuron numbers of 64 and 80, the error value may reach ~10⁻⁴, which corresponds to the measurement step in the acquired experimental data. The median error values for smaller neuron numbers in the layers (16, 32, and 48) lie below 5×10⁻⁶, with outliers not exceeding 3×10⁻⁵. The observed result is explained by switching from the regression to the classification problem when the complexity of the dNN architecture increases.
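
    A minimal sketch of this hold-one-state-out evaluation (reusing the hypothetical build_dnn and RelativeMSEStopping helpers sketched above; the array names and the held-out RIU value in the usage comment are illustrative):

    import numpy as np

    def leave_one_state_out(X, y, build_model, held_out_value):
        """Train on all RIU states except one and report the absolute
        prediction errors for the samples of the unseen state."""
        mask = np.isclose(y, held_out_value)
        model = build_model(X.shape[1])
        model.fit(X[~mask], y[~mask], epochs=1500, verbose=0,
                  validation_split=0.18,                 # ~15% of the remaining data
                  callbacks=[RelativeMSEStopping()])
        return np.abs(model.predict(X[mask], verbose=0).ravel() - y[mask])

    # errors = leave_one_state_out(intensities, riu_labels, build_dnn, 1.1e-3)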

    Summing up the results of all the tests, the optimal dNN architecture for this task is expected to consist of 3 hidden layers with 64 neurons, 32 neurons in the input layer, the tanh activation function, and the Adam optimization method. This configuration keeps the RIU prediction error below 1×10⁻⁵ with a median value of 3×10⁻⁶ for unknown experimental results.

    Conclusions

    In this work we have demonstrated the first example of an affordable self-learning whispering gallery mode sensor and analyzed its performance on the detection of refractive index variations. A cost- and energy-efficient laser source operating at a fixed frequency, a multi-cavity imaging interrogation scheme, and deep-learning analysis are the key distinguishing features that enable high-resolution quantification of the sensing data without preliminary information or procedures. The comparison with the commonly utilized method of tracking the spectral position of the resonance shows an improvement in the temporal resolution by at least two orders of magnitude. It has been shown that the selected instrument configuration provides a detection limit for refractive index variations of at least 4×10⁻⁵. The study of several deep neural network architectures for RIU detection shows the possibility of keeping the absolute error between the measured RIU values and the values predicted by the dNN at the 3×10⁻⁶ level over the dynamic range of RIU variations from 0 to 2×10⁻³.

    The reported results demonstrate for the first time the possibility of constructing self-learning sensing solutions with an affordable instrument configuration, reduced complexity, and small device size, and are expected to contribute significantly to the change of the sensing paradigm from the model-based to the machine-learning-inspired approach. The proposed sensor, supplemented by the essential set of training data that can be collected automatically, may be utilized in a wide range of practice-oriented sensing tasks where prior knowledge of the response model is unnecessary. The trained NN is applicable to any other substance that can be dissolved in deionized water at different concentrations, resulting in a bulk refractive index change in the range from 0 to 2×10⁻³ relative to the nominal one (water), provided that no interaction of the sensed solution with the microresonator material occurs. Moreover, the proposed approach is easily extendable to the detection of several physical/chemical parameters simultaneously. In this case, an extended experimental dataset representing the different external conditions has to be gathered, whereas the detector itself remains unchanged. In addition, this method is expected to be applicable to the detection of targeted biochemical molecules after preliminary functionalization of the microresonator surface with the corresponding receptor, which will be addressed in follow-up research.

    Author contributions

    A. Saetchnikov conceived the work, conducted the experiments, and wrote the paper. A. Saetchnikov and E. Tcherniavskaia conducted the deep-learning data processing. V. Saetchnikov supervised the experiments. A. Ostendorf supervised and directed the research. All authors discussed the results and commented on the manuscript.

    Competing interests

    The authors declare no competing financial interests.

    References

    [1] F Vollmer, S Arnold. Whispering-gallery-mode biosensing: label-free detection down to single molecules. Nat Methods, 5, 591-596(2008).

    [2] M R Foreman, J D Swaim, F Vollmer. Whispering gallery mode sensors. Adv Opt Photonics, 7, 168-240(2015).

    [3] Y N Zhang, T M Zhou, B Han, A Z Zhang, Y Zhao. Optical bio-chemical sensors based on whispering gallery mode resonators. Nanoscale, 10, 13832-13856(2018).

    [4] X F Jiang, A J Qavi, S H Huang, L Yang. Whispering-gallery sensors. Matter, 3, 371-392(2020).

    [5] V B Braginsky, M L Gorodetsky, V S Ilchenko. Quality-factor and nonlinear properties of optical whispering-gallery modes. Phys Lett A, 137, 393-397(1989).

    [6] K J Vahala. Optical microcavities. Nature, 424, 839-846(2003).

    [7] F Vollmer, D Braun, A Libchaber, M Khoshsima, I Teraoka et al. Protein detection by optical shift of a resonant microcavity. Appl Phys Lett, 80, 4057(2002).

    [8] D K Armani, T J Kippenberg, S M Spillane, K J Vahala. Ultra-high-Q toroid microcavity on a chip. Nature, 421, 925-928(2003).

    [9] T J Kippenberg, S M Spillane, D K Armani, K J Vahala. Fabrication and coupling to planar high-Q silica disk microcavities. Appl Phys Lett, 83, 797-799(2003).

    [10] I M White, H Oveys, X D Fan. Liquid-core optical ring-resonator sensors. Opt Lett, 31, 1319-1321(2006).

    [11] F Vollmer, L Yang. Label-free detection with high-Q microcavities: a review of biosensing mechanisms for integrated devices. Nanophotonics, 1, 267-291(2012).

    [12] Y Y Zhi, X C Yu, Q H Gong, L Yang, Y F Xiao. Single nanoparticle detection using optical microcavities. Adv Mater, 29, 1604920(2017).

    [13] V R Dantham, S Holler, C Barbre, D Keng, V Kolchenko et al. Label-free detection of single protein using a nanoplasmonic-photonic hybrid microcavity. Nano Lett, 13, 3347-3351(2013).

    [14] M D Baaske, F Vollmer. Optical observation of single atomic ions interacting with plasmonic nanorods in aqueous solution. Nat Photonics, 10, 733-739(2016).

    [15] L N He, Ş K Özdemir, J G Zhu, W Kim, L Yang. Detecting single viruses and nanoparticles using whispering gallery microlasers. Nat Nanotechnol, 6, 428-432(2011).

    [16] U Bog, T Laue, T Grossmann, T Beck, T Wienhold et al. On-chip microlasers for biomolecular detection via highly localized deposition of a multifunctional phospholipid ink. Lab Chip, 13, 2701-2707(2013).

    [17] W J Chen, Ş K Özdemir, G M Zhao, J Wiersig, L Yang. Exceptional points enhance sensing in an optical microcavity. Nature, 548, 192-196(2017).

    [18] X Zhou, L Zhang, W Pang. Performance and noise analysis of optical microresonator-based biochemical sensors using intensity detection. Opt Express, 24, 18197-18208(2016).

    [19] T Reynolds, N Riesen, A Meldrum, X D Fan, J M M Hall et al. Fluorescent and lasing whispering gallery mode microresonators for sensing applications. Laser Photonics Rev, 11, 1600265(2017).

    [20] J D Swaim, J Knittel, W P Bowen. Detection of nanoparticles with a frequency locked whispering gallery mode microresonator. Appl Phys Lett, 102, 183106(2013).

    [21] S Rosenblum, Y Lovsky, L Arazi, F Vollmer, B Dayan. Cavity ring-up spectroscopy for ultrafast sensing with optical microresonators. Nat Commun, 6, 6788(2015).

    [22] G C Righini, S Soria. Biosensing by WGM microspherical resonators. Sensors, 16, 905(2016).

    [23] J Su. Label-free biological and chemical sensing using whispering gallery mode optical resonators: past, present, and future. Sensors, 17, 540(2017).

    [24] L Cai, J Y Pan, Y Zhao, J Wang, S Xiao. Whispering gallery mode optical microresonators: structures and sensing applications. Phys Status Solidi A, 217, 1900825(2020).

    [25] G Schweiger, R Nett, T Weigel. Microresonator array for high-resolution spectroscopy. Opt Lett, 32, 2644-2646(2007).

    [26] V A Saetchnikov, E A Tcherniavskaia. Using optical resonance of whispering gallery modes in microspheres for real-time detection and identification of biological compounds. J Appl Spectrosc, 77, 714-721(2010).

    [27] H A Huckabay, S M Wildgen, R C Dunn. Label-free detection of ovarian cancer biomarkers using whispering gallery mode imaging. Biosens Bioelectron, 45, 223-229(2013).

    [28] A B Petermann, A Varkentin, B Roth, U Morgner, M Meinhardt-Wollweber. All-polymer whispering gallery mode sensor system. Opt Express, 24, 6052-6062(2016).

    [29] A V Saetchnikov, E A Tcherniavskaia, V V Skakun, V A Saetchnikov, A Ostendorf. Reusable dispersed resonators-based biochemical sensor for parallel probing. IEEE Sens J, 19, 7644-7651(2019).

    [30] Proceedings Volume 11354, Optical Sensing and Detection VI, 1135427 (SPIE, 2020); https://doi.org/10.1117/12.2555391.

    [31] E A Tcherniavskaia, V A Saetchnikov. Application of neural networks for classification of biological compounds from the characteristics of whispering-gallery-mode optical resonance. J Appl Spectrosc, 78, 457-460(2011).

    [32] D Hu, C L Zou, H L Ren, J Lu, Z C Le et al. Multi-parameter sensing in a multimode self-interference micro-ring resonator by machine learning. Sensors, 20, 709(2020).

    [33] V V Kornienko, I A Nechepurenko, P N Tananaev, E D Chubchev, A S Baburin et al. Machine learning for optical gas sensing: a leaky-mode humidity sensor as example. IEEE Sens J, 20, 6954-6963(2020).

    [34] Z S Ballard, D Shir, A Bhardwaj, S Bazargan, S Sathianathan et al. Computational sensing using low-cost and mobile plasmonic readers designed by machine learning. ACS Nano, 11, 2266-2274(2017).

    [35] Proceedings Volume 10678, Optical Micro- and Nanometrology VII, 106780W (SPIE, 2018); http://doi.org/10.1117/12.2309660.

    [36] D C Harris. Quantitative Chemical Analysis, 6th ed (W. H. Freeman, New York, 2003).

    [37] H P Loock, P D Wentzell. Detection limits of chemical sensors: applications and misapplications. Sens Actuators B Chem, 173, 157-163(2012).
