Chinese Optics Letters, Vol. 21, Issue 11, 110009 (2023)

AI-assisted cell identification and optical sorting [Invited]

Ruping Deng1,2, Yuan Song2, Jiahao Yang2, Changjun Min2, Yuquan Zhang2,*, Xiaocong Yuan2, and Weiwei Liu1
Author Affiliations
  • 1Institute of Modern Optics, Tianjin Key Laboratory of Micro-scale Optical Information Science and Technology, Nankai University, Tianjin 300350, China
  • 2Nanophotonics Research Centre, Institute of Microscale Optoelectronics & State Key Laboratory of Radio Frequency Heterogeneous Integration, Shenzhen University, Shenzhen 518060, China
    DOI: 10.3788/COL202321.110009

    Abstract

    Cell identification and sorting have been hot topics in recent years. However, most conventional approaches can only predict the category of a single target and lack the ability to perform multitarget tasks that provide coordinate information of the targets. This limits the development of high-throughput cell screening technologies. Fortunately, artificial intelligence (AI) systems based on deep-learning algorithms make it possible to extract hidden features of cells from raw image information. Here, we demonstrate an AI-assisted multitarget processing system for cell identification and sorting. With this system, each target cell in a mixture can be swiftly and accurately identified by extracting cell morphological features, after which accurate cell sorting is achieved through noninvasive manipulation by optical tweezers. The AI-assisted model shows promise in guiding the precise manipulation and intelligent detection of high-flux cells, thereby realizing semi-automatic cell research.

    1. Introduction

    As a crucial component of life-science research, cellular analysis helps to reveal changes in cell differentiation, metabolism, gene expression[1,2], etc. Among the various technical approaches available, cell identification and sorting technologies provide a means to extract individual cells or specific cell groups. Cell identification is the essential precondition for cell sorting. Typical identification methods include immunomagnetic beads[3,4], cell-surface or fluorescence labeling[5], and so on. However, these methods have certain limitations (for instance, the essential pretreatment labeling of samples, and the random changes in the properties of target cells caused by the labels)[6,7]. These defects thus affect their subsequent promotion and application.

    Vision-based neural networks can directly use bright-field imaging data for cell classification[8–10]. They not only overcome the limitations of traditional cell-separation techniques but also improve the automation level of the system. Machine vision, which has been widely used in facial recognition[11], autonomous driving[12], and image detection[13–16], is an extension of human perception: it allows computers to extract image features through convolutional neural networks (CNNs) to obtain and perceive relevant information. By training with a large data set, such an intelligent model can produce classification predictions within milliseconds. It thus greatly improves work efficiency by reducing the time consumed in the identification process and eventually achieves efficient and accurate identification and localization of targets. However, this approach is typically suited only to the classification of a single cell and copes poorly with situations where multiple targets exist in the field of view simultaneously.

    As for cell sorting, it is typically carried out by means of gravity[17], centrifugal force[18], or sound waves[19–21]. These methods permit high-throughput operation, but certain effects on cell activity are inevitable, and precise manipulation remains a challenge. The optical tweezers technique, owing to its noncontact and nondestructive characteristics, is a potent alternative for this task[22–24]. In this work, we demonstrate an artificial intelligence (AI)-assisted cell identification and sorting system in a microfluidic chip. In the system, we employ the you only look once (YOLO) object-detection method, based on a one-stage CNN, as the “brain” of the system. The YOLO model classifies all cell targets in the mixture sample and accurately locates each target cell's position in the microfluidic channel on a millisecond timescale. After adaptive training, the average inference time of the model is about 30 ms per frame. On this basis, target cells are precisely sorted into individual chambers by optical tweezers, which provide an independent space for cell cultivation and further research. This work provides a powerful platform for cell identification and sorting and is expected to play a role in the future development of cellular studies.

    2. Methods

    Figure 1 depicts a schematic of the AI-assisted optical system for cell identification and sorting. A 532 nm laser (MSL-FN-532-S, 400 mW) is first expanded to fit the aperture of a spatial light modulator (SLM) (PLUTO-2.1, 60 Hz, HOLOEYE), and a spherical wave is then generated by loading a predesigned hologram onto the SLM. To fine-tune the focal field, the zeroth-order diffraction beam is filtered out, and the remaining first-order light field is relayed by a 4f system before entering the objective (40×, NA = 0.69). An attenuation device composed of a half-wave plate and a polarizer is inserted in front of the objective lens; the half-wave plate is mounted on a precision rotation stage to accurately modulate the intensity of the focal field. A microfluidic chip is employed to carry the sample solution, with the inlet and outlet of the chip connected to a microfluidic injection air pump so that the entire chip system works in a closed state. By adjusting the injection pressure, the flow rate of the solution inside the chip can be controlled. Finally, the image is captured by a CCD camera after filtering out the obtrusive laser signal and is then fetched by the AI program for target detection.
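    Conceptually, the predesigned hologram combines a Fresnel-lens phase, which converts the incident plane wave into the spherical wave, with a blazed grating that steers the modulated light into the first diffraction order so that the unmodulated zeroth order can be filtered out. A minimal sketch of such a hologram follows; the panel geometry is the nominal PLUTO-2.1 format, while the focal length and grating period are illustrative assumptions rather than the values used in our setup.

```python
import numpy as np

# Sketch of a phase hologram for the SLM: Fresnel lens + blazed grating.
# Panel geometry is nominal for the PLUTO-2.1; f_lens and grating_period
# are illustrative assumptions, not the experimental values.
W, H = 1920, 1080        # SLM resolution (pixels)
pitch = 8e-6             # SLM pixel pitch (m)
wavelength = 532e-9      # trapping laser wavelength (m)
f_lens = 0.5             # focal length encoded by the Fresnel lens (m)
grating_period = 10      # blazed-grating period (pixels)

y, x = np.mgrid[0:H, 0:W]
xc, yc = (x - W / 2) * pitch, (y - H / 2) * pitch

# Fresnel-lens phase: turns the incident plane wave into a spherical wave.
phi_lens = -np.pi * (xc**2 + yc**2) / (wavelength * f_lens)
# Blazed grating: shifts the modulated light into the first diffraction
# order, away from the unmodulated zeroth order that is filtered out.
phi_grating = 2 * np.pi * x / grating_period

hologram = np.mod(phi_lens + phi_grating, 2 * np.pi)  # wrap phase to [0, 2pi)
gray = np.uint8(hologram / (2 * np.pi) * 255)         # 8-bit frame for the SLM
```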


    Figure 1. Experimental configuration of the AI-assisted optical system for cell identification and sorting.

    The microfluidic chip is made of polydimethylsiloxane (PDMS) bonded on a glass substrate and includes five main channels and twenty L-shaped subchambers. The L-shaped chambers are designed to prevent cells from being washed out when the microfluidics in the main channel are in a high-flow-rate state. The mixture solution is prepared by mixing yeast cells with polystyrene (PS) spheres 5 µm in diameter, so that the two have similar sizes. The mixture solution is centrifuged twice and then diluted with deionized water to avoid blocking the channels.

    Figure 2 depicts the experimental procedure of this work. First, deionized water is used to fill the microfluidic channel to drive out the air inside the channel and chambers [Fig. 2(a)]. Second, the mixture of yeast cells and PS spheres is injected into the chip at a rate of 1 nL/s [Fig. 2(b)], and the mixture solution flows uniformly in the main channel shortly afterwards. Image detection of the yeast cells and PS spheres is then performed [Fig. 2(c)]. Third, the YOLO algorithm is used to detect the samples in real time. In this process, the captured image is precompressed to a width of 1280 pixels. The black boxes in Fig. 2(d) indicate sliding windows, which sequentially select portions of the image for convolution with the convolutional kernel to extract similar features. The extracted feature information is turned into feature vectors by a CNN. Subsequently, the feature vectors are used to infer the category probability and coordinate information of each target through a fully connected layer [Fig. 2(d)]. Here, each feature vector represents the feature information of an anchor box. To predict targets more accurately, every target is covered by multiple overlapping anchor boxes, and the box with the highest score is finally selected after non-maximum suppression (NMS)[25], as depicted in Fig. 2(e); a minimal sketch of this selection step is given below. Owing to the fast inference of the AI algorithm, it takes only about 30 ms to complete the prediction for each image frame. Based on the identification results, the yeast cells of interest can be proactively manipulated into the appointed subchamber by optical tweezers [Fig. 2(f)], thus achieving the goal of sorting individual cells from the mixture.
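    For reference, the greedy NMS step cited above[25] can be written in a few lines; the implementation below is a generic sketch, and the IoU threshold is an illustrative value rather than the one used in our model.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes:
    keep the highest-scoring box, drop its strong overlaps, repeat."""
    order = np.argsort(scores)[::-1]   # candidate indices, best score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the kept box with all remaining candidates.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = ((boxes[order[1:], 2] - boxes[order[1:], 0])
                  * (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + area_r - inter)
        # Keep only candidates that do not overlap the kept box too much.
        order = order[1:][iou <= iou_thresh]
    return keep
```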


    Figure 2. Experimental procedure of the AI-assisted cell identification and sorting. (a) Infiltration of the microfluidic chip; (b) mixed sample solution injection into the channel; (c) bright-field imaging of the mixture sample; (d) schematic of the model for image-based cell identification. The black boxes indicate sliding windows; a ConvNet is employed for feature extraction, while a fully connected network is used for classification and regression. (e) Identification results of the samples: blue boxes mark cells; magenta boxes mark PS spheres; (f) sorting of target of interest by optical tweezers.

    3. Results and Discussion

    Based on the above AI-assisted object-detection algorithm, identification and localization of yeast cells in a mixture solution are achievable, effectively improving the accuracy with which samples of similar morphological size can be distinguished. It should be noted that the acquired images may lose detailed information inside a cell when it is slightly out of focus, resulting in feature loss and defective resolution. To address this issue, a data set containing bright-field images under various magnifications and focal lengths was precreated to enhance the model's adaptability. Additionally, to reduce the probability of misidentifying the background as particulate samples, images with microfluidic channels in the background are requisite in the training set. After 200 training iterations, the total inference and NMS time for a single image frame is about 30 ms. The confusion matrix of the algorithm model, shown in Fig. 3(a), indicates that almost all cells and PS spheres can be correctly classified. The first column of the confusion matrix represents all ground-truth cell instances: 95% are correctly predicted, while 5% are falsely predicted as background. Here, FN denotes false negatives, indicating the proportion of missed detections, while FP denotes false positives, reflecting the proportion of background misclassified as targets. The data in the third column thus indicate that, among the cases in which background is erroneously identified as a target, 69% are misidentified as cells and 31% as PS spheres. Significantly, background misidentification is a rare event; the third column only represents the ratio between the two FP types. Generally, the mean average precision (mAP), defined as the area under the precision-recall curve averaged over all classes[26], is employed to measure the accuracy of a model. For our model it reaches 0.98, which is credible enough for target identification.
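    For clarity, the area-under-the-curve computation behind this metric can be sketched as in the VOC evaluation[26]; this is a generic reference implementation, not the exact evaluation code used here.

```python
import numpy as np

def average_precision(recall, precision):
    """VOC-style AP: area under the precision-recall curve; mAP is this
    value averaged over all classes (here, cells and PS spheres)."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # Replace each precision by the maximum precision at any higher recall,
    # giving the monotone envelope of the raw curve.
    p = np.maximum.accumulate(p[::-1])[::-1]
    # Integrate the envelope over each recall step.
    idx = np.where(r[1:] != r[:-1])[0]
    return np.sum((r[idx + 1] - r[idx]) * p[idx + 1])
```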


    Figure 3. Cell identification results. (a) Confusion matrix of the algorithm model; the background FP indicates the case of background being misidentified as targets, while background FN indicates the missed detections. (b) and (c) Experimental results of target identification; (b) is the original image frame, and (c) is the identification results in accordance with (b). Blue labels denote the yeast cells, and magenta the PS spheres. The numbers indicate the identification score of each target, while the green arrow indicates the flow direction in the channel.

    In the AI algorithm model, the video captured by the CCD camera is used as the input source to calibrate the identification efficiency in experiments (Visualization 1). Figure 3(b1) is the full view of an arbitrarily selected frame from the video. For clearer presentation, Figs. 3(b2) and 3(b3) are enlarged views, from another two frames, of the area labeled by the red box in Fig. 3(b1). Figure 3(c) shows the identification results of the samples corresponding to Fig. 3(b). For better presentation, the bounding boxes surrounding the targets in Fig. 3(c) have been expanded by six pixels outward from the actual predicted box size, and the yeast cells and PS spheres are marked by blue and magenta boxes, respectively. This clearly indicates that even aggregated cells can be individually identified and accurately located. In Fig. 3(c), the number alongside each label denotes the identification score of the target, which combines a position score and a category score: it is the product of the presence score of a target within the bounding box and the accuracy score of its classification, and thus signifies how completely and confidently the target is captured by the box. Therefore, a higher score indicates better confidence and recognition by the system for a given target. Experimentally, all identification scores are around 0.8, meaning that this model can accurately identify and locate the target cells.
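    A minimal illustration of this scoring rule follows; the function and variable names are ours, not the model's internals.

```python
def identification_score(presence: float, accuracy: float) -> float:
    """Score shown next to each box: the presence (objectness) score of a
    target in the box times the classification accuracy score."""
    return presence * accuracy

# Example: a box that very likely contains a target (0.9), classified as a
# yeast cell with probability 0.88, scores ~0.79, consistent with the
# ~0.8 scores observed experimentally.
print(identification_score(0.9, 0.88))  # 0.792
```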

    The preparation and operation of the microfluidic chip can significantly impact the final identification performance, as bubbles and impurities can lead to false results. To avoid these issues, special precautions have been taken, including properly scoping the recognition range, enriching the data set, and improving the quality of the microfluidic chips. In addition, the images captured by the CCD camera must be free from ghosting artifacts to be correctly recognized and detected. Hence, the maximum flow rate in experiments is jointly limited by the acquisition speed of the CCD camera and the inference time of the model.
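    A back-of-envelope sketch of these two limits is given below; apart from the measured ~30 ms inference time, all values are assumed for illustration.

```python
exposure = 10e-3      # assumed CCD exposure time (s)
inference = 30e-3     # measured model inference time per frame (s)
pixel_size = 0.25e-6  # assumed object-plane pixel size for the 40x path (m)
view_width = 320e-6   # assumed width of the field of view (m)

# To avoid ghosting, a target should move less than ~1 pixel per exposure.
v_blur = pixel_size / exposure
# A target must also remain in the view field over one processing period,
# which can be no shorter than the inference time.
v_dwell = view_width / inference

print(f"blur-limited speed:  {v_blur * 1e6:.0f} um/s")   # ~25 um/s
print(f"dwell-limited speed: {v_dwell * 1e3:.1f} mm/s")  # ~10.7 mm/s
# The blur limit dominates, matching the low flow rates used in experiments.
```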

    Based on the above AI-assisted identification results, the identified yeast cells can then be sorted from the mixture and manipulated into the subchambers by the optical tweezers. The force exerted by optical tweezers usually lies in the piconewton (pN) range; therefore, the microfluidics in the channel are maintained at a relatively low flow rate (<20 µm/s) during the experimental sorting process. The flow rate is controlled by adjusting the air pressure at the inlet and outlet, so that the solution in the chip channel flows smoothly and slowly. Figure 4 shows the experimental results of sorting the identified yeast cells (Visualization 2). In Figs. 4(a) and 4(b), the trapped cell inevitably moves upward due to the scattering force and, as a result, appears slightly out of focus. Nevertheless, our model still possesses favorable adaptability because it has been pretrained with morphological information from numerous cells at different focal planes. Therefore, the trapped yeast cell can still be identified in real time during the manipulation process.
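    A quick Stokes-drag estimate shows why this flow rate is safe: assuming water viscosity and a 5 µm cell, the fluidic drag on a trapped cell stays around 1 pN, within reach of the optical trap.

```python
import numpy as np

eta = 1.0e-3     # dynamic viscosity of water (Pa*s)
radius = 2.5e-6  # cell radius (m), for a ~5 um yeast cell
v = 20e-6        # maximum flow speed during sorting (m/s)

# Stokes drag on a sphere moving relative to the fluid: F = 6*pi*eta*r*v.
drag = 6 * np.pi * eta * radius * v
print(f"drag force: {drag * 1e12:.2f} pN")  # ~0.94 pN, comparable to typical
                                            # pN-scale trapping forces
```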


    Figure 4. Experimental results of sorting yeast cells in the mixture. (a)–(c) depict the first set of sorting processes; (d) and (e) represent repeated sorting processes; and (f) is the final status for multiple cell sorting. The magenta boxes indicate PS particles, while the blue ones are yeast cells. The green arrows indicate the cells that have been trapped by the optical tweezers.

    Figure 4 illustrates the entire process from cell recognition to capture. The green arrows indicate the cells that have been successfully trapped, and the red dots in Figs. 4(c) and 4(e) represent the moving trajectory of the trapped cell. By cycling this manipulation procedure, multiple cells of interest can be successfully sorted into the designated subchambers. Figure 4(f) shows the results of multiple sorting operations, which verify the feasibility and stability of our system.

    In experiments, the entire channel remains in a flowing state; therefore, new cells enter the field of view during the optical sorting process, indicating that dynamic sorting is feasible. For future developments, the optical manipulation can be further combined with an automation control program to plan the transport tracks of trapped cells, as sketched conceptually below. Furthermore, by virtue of the separate chambers, the sorted cells can be used for intensive research, such as studies of cell interactions and cell proliferation.
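    Such automation could take a form like the following conceptual sketch, in which every object and function name is a hypothetical placeholder rather than an existing API.

```python
def sorting_loop(camera, detector, slm, chambers):
    """Conceptual closed loop: detect, pick a target, steer the trap.
    All names here (grab, infer, plan_path, hologram_for_trap, ...) are
    hypothetical placeholders for the respective hardware/software hooks."""
    for chamber in chambers:
        frame = camera.grab()                 # acquire a bright-field frame
        detections = detector.infer(frame)    # YOLO: classes, scores, boxes
        cells = [d for d in detections if d.label == "yeast"]
        if not cells:
            continue                          # wait for new cells to flow in
        target = max(cells, key=lambda d: d.score)  # most confident cell
        # Plan a track around the located "obstacles", then step the
        # holographic trap along it with successive holograms.
        for point in plan_path(target.center, chamber.entrance):
            slm.display(hologram_for_trap(point))
```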

    4. Conclusion

    In this work, we demonstrated an AI-assisted cell identification and sorting system on a chip. It provides a practical approach to identify and transport target cells from a mixture solution. The AI-assisted algorithmic model successfully implements accurate identification and localization of the targets. On this basis, since the model can simultaneously locate the positions of all targets in the field of view, including the “obstacles,” it can guide the sorting process so that cells of interest are steered around them. In this system, the trapping light field is modulated by the SLM, and dynamic concurrent manipulations are possible by generating switchable multiple holographic fields. In addition, by virtue of adaptive dynamic programming control, a fully automatic system for cell identification and sorting is also viable. Overall, this work provides a prototype of an automatic cell-sorting technique and will create opportunities for subsequent cell proliferation studies and other analytical work.

    References

    [1] D. Li, N. Wang, T. Zhang, G. Wu, Y. Xiong, Q. Du, Y. Tian, W. Zhao, J. Ye, S. Gu, Y. Lu, D. Jiang, F. Xu. Label-free fiber nanograting sensor for real-time in situ early monitoring of cellular apoptosis. Adv. Photonics, 4, 016001(2022).

    [2] M. Lotfollahi, S. Rybakov, K. Hrovatin, S. Hediyeh-zadeh, C. Talavera-López, A. V. Misharin, F. J. Theis. Biologically informed deep learning to query gene programs in single-cell atlases. Nat. Cell Biol., 25, 337(2023).

    [3] B. Mair, P. M. Aldridge, R. S. Atwal, D. Philpott, M. Zhang, S. N. Masud, M. Labib, A. H. Y. Tong, E. H. Sargent, S. Angers, J. Moffat, S. O. Kelley. High-throughput genome-wide phenotypic screening via immunomagnetic cell sorting. Nat. Biomed. Eng., 3, 796(2019).

    [4] Z. Wang, H. Wang, S. Lin, S. Ahmed, S. Angers, E. H. Sargent, S. O. Kelley. Nanoparticle amplification labeling for high-performance magnetic cell sorting. Nano Lett., 22, 4774(2022).

    [5] J. C. Baret, O. J. Miller, V. Taly, M. Ryckelynck, A. El-Harrak, L. Frenz, C. Rick, M. L. Samuels, J. B. Hutchison, J. J. Agresti, D. R. Link, D. A. Weitz, A. D. Griffiths. Fluorescence-activated droplet sorting (FADS): efficient microfluidic cell sorting based on enzymatic activity. Lab Chip, 9, 1850(2009).

    [6] M. Purschke, N. Rubio, K. D. Held, R. W. Redmond. Phototoxicity of Hoechst 33342 in time-lapse fluorescence microscopy. Photochem. Photobiol. Sci., 9, 1634(2010).

    [7] H. S. Liu, M. S. Jan, C. K. Chou, P. H. Chen, N. J. Ke. Is green fluorescent protein toxic to the living cells?. Biochem. Biophys. Res. Commun., 260, 712(1999).

    [8] R. Tang, L. Xia, B. Gutierrez, I. Gagne, A. Munoz, K. Eribez, N. Jagnandan, X. Chen, Z. Zhang, L. Waller, W. Alaynick, S. H. Cho, C. An, Y.-H. Lo. Low-latency label-free image-activated cell sorting using fast deep learning and AI inferencing. Biosens. Bioelectron., 220, 114865(2023).

    [9] K. Lee, S.-E. Kim, J. Doh, K. Kim, W. K. Chung. User-friendly image-activated microfluidic cell sorting technique using an optimized, fast deep learning algorithm. Lab Chip, 21, 1798(2021).

    [10] Z. Diao, L. Kan, Y. Zhao, H. Yang, J. Song, C. Wang, Y. Liu, F. Zhang, T. Xu, R. Chen, Y. Ji, X. Wang, X. Jing, J. Xu, Y. Li, B. Ma. Artificial intelligence-assisted automatic and index-based microbial single-cell sorting system for one-cell-one-tube. mLife, 1, 448(2022).

    [11] S. Emami, V. P. Suciu. Facial recognition using OpenCV. J. Mobile Embedded Distributed Syst., 4, 38(2012).

    [12] É. Zablocki, H. Ben-Younes, P. Pérez, M. Cord. Explainability of deep vision-based autonomous driving systems: review and challenges. Int. J. Comput. Vision, 130, 2425(2022).

    [13] B. J. Erickson, P. Korfiatis, Z. Akkus, T. L. Kline. Machine learning for medical imaging. RadioGraphics, 37, 505(2017).

    [14] J. Zhao, Y. Y. Sun, H. B. Zhu, Z. Y. Zhu, J. E. Antonio-Lopez, R. A. Correa, S. A. Pang, A. Schulzgen. Deep-learning cell imaging through Anderson localizing optical fiber. Adv. Photonics, 1, 066001(2019).

    [15] S. Pan, L. Wang, Y. Ma, G. Zhang, R. Liu, T. Zhang, K. Xiong, S. Chen, J. Zhang, W. Li, S. Yang. Photoacoustic-enabled automatic vascular navigation: accurate and naked-eye real-time visualization of deep-seated vessels. Adv. Photonics, 2, 046001(2023).

    [16] S. Feng, Y. Xiao, W. Yin, Y. Hu, Y. Li, C. Zuo, Q. Chen. Fringe-pattern analysis with ensemble deep learning. Adv. Photonics, 2, 036010(2023).

    [17] T. Luo, L. Fan, Y. Zeng, Y. Liu, S. Chen, Q. Tan, R. H. W. Lam, D. Sun. A simplified sheathless cell separation approach using combined gravitational-sedimentation-based prefocusing and dielectrophoretic separation. Lab Chip, 18, 1521(2018).

    [18] T. Morijiri, M. Yamada, T. Hikida, M. Seki. Microfluidic counterflow centrifugal elutriation system for sedimentation-based cell separation. Microfluid. Nanofluid., 14, 1049(2012).

    [19] A. Bussonniere, Y. Miron, M. Baudoin, O. Bou Matar, M. Grandbois, P. Charette, A. Renaudin. Cell detachment and label-free cell sorting using modulated surface acoustic waves (SAWs) in droplet-based microfluidics. Lab Chip, 14, 3556(2014).

    [20] X. Ding, Z. Peng, S. C. Lin, M. Geri, S. Li, P. Li, Y. Chen, M. Dao, S. Suresh, T. J. Huang. Cell separation using tilted-angle standing surface acoustic waves. Proc. Natl. Acad. Sci. USA, 111, 12992(2014).

    [21] S. Yang, J. Rufo, R. Zhong, J. Rich, Z. Wang, L. P. Lee, T. J. Huang. Acoustic tweezers for high-throughput single-cell analysis. Nat. Protoc., 18, 2441(2023).

    [22] X. Xie, X. Wang, C. Min, H. Ma, Y. Yuan, Z. Zhou, Y. Zhang, J. Bu, X. Yuan. Single-particle trapping and dynamic manipulation with holographic optical surface-wave tweezers. Photonics Res., 10, 166(2022).

    [23] R. Deng, Y. Zhang, X. Wang, X. Xie, Y. Song, J. Bu, C. Min, X. Yuan. In situ intracellular Raman spectroscopic detection with graphene-based thermoelectric optical tweezers. Sensor. Actuat. B-Chem., 361, 131722(2022).

    [24] L. Yu, Y. Jia, X. Hu, S. Wang, H. Chen, S. Liu, H. Deng, M. Wang, J. Yin. Trapping and revolving micron particles by transformable line traps of optical tweezers. Chin. Opt. Lett., 20, 053801(2022).

    [25] A. Neubeck, L. Van Gool. Efficient non-maximum suppression. 18th International Conference on Pattern Recognition, 850(2006).

    [26] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, A. Zisserman. The Pascal visual object classes (VOC) challenge. Int. J. Comput. Vision, 88, 303(2010).
