Multi-resolution analysis enables fidelity-ensured deconvolution for fluorescence microscopy
eLight volume 4, Article number: 14 (2024)
Abstract
Fluorescence microscopy imaging is essentially a convolution process distorted by random noise, which limits critical parameters such as imaging speed, duration, and resolution. Although algorithmic compensation has shown great potential to enhance these pivotal aspects, its fidelity remains questioned. Here we develop a physics-rooted computational resolution-extension and denoising method with ensured fidelity. Our approach employs a multi-resolution analysis (MRA) framework to extract the two main characteristics of fluorescence images against noise: across-edge contrast and along-edge continuity. By constraining these two features in a model-solution framework using the framelet and curvelet transforms, we develop MRA deconvolution algorithms that improve the signal-to-noise ratio (SNR) by up to 10 dB more than spatial-derivative-based penalties and provide up to two-fold, fidelity-ensured resolution improvement, in contrast to the artifact-prone Richardson-Lucy inference. We demonstrate that our methods improve the performance of various diffraction-limited and super-resolution microscopies with ensured fidelity, enabling more challenging imaging tasks to be accomplished.
1 Introduction
Fluorescence microscopy lays the foundation of modern optical microscopy for live-cell imaging [1]. The birth of fluorescence super-resolution (SR) microscopy [2,3,4,5] further broke the diffraction limit of conventional optical microscopy. Although in principle SR microscopy can reach the single-molecule scale, in practice its resolution is limited by the detectable fluorescent photon flux. To overcome this problem, on the one hand, preference is given to developing dyes with higher efficiency and photostability, along with more sensitive and faster detection techniques [6,7]. On the other hand, computational image-enhancement techniques [8,9,10,11,12,13,14,15,16,17] are highly welcome, as they carry the potential to improve imaging duration and spatiotemporal resolution by recovering the fluorescence signal from abundant noise under a low photon budget.
Deconvolution is one of the most effective schemes for recovering fluorescence signals from optical blurring and noise. Classical inverse-problem solvers such as Wiener filtering and Richardson-Lucy (RL) iteration work at moderate noise levels but perform poorly on low-SNR images. Regularization-based deconvolution methods with noise robustness show better performance. Total-variation (TV) [8] and Hessian [9] regularized deconvolutions, which characterize the continuity feature, were developed to compensate low-SNR images in structured illumination microscopy (SIM), enabling imaging at faster speed and for longer duration. Sparse deconvolution [10], based on image sparsity and Hessian continuity, was further proposed to enhance the standard RL iteration and substantially improves the resolution of SIM. However, limited by the naïve assumptions about fluorescence images in conventional algorithms, losses of spatiotemporal detail are usually inevitable. Moreover, owing to its artifact-proneness [18,19], the fidelity of statistical maximum-likelihood estimation (MLE) methods such as RL iteration remains in doubt. The alternative, physics-rooted constrained inverse filtering such as the fast iterative soft-thresholding algorithm (FISTA) [20], is less effective for fluorescence images because of its fragility to noise and its discordance with variance-based regularization.
As an improvement over conventional data-processing methods, the wavelet transform was proposed by mathematicians in the early twentieth century [21] but was not applied in engineering until 1984, for seismic-wave data analysis [22]. Later, the theory of multi-resolution analysis (MRA) [23,24,25], which decomposes a signal into a series of resolution levels, was proposed, providing a novel idea for signal-space segmentation. The traditional orthogonal wavelet [26,27] was the first MRA tool to be developed and achieves excellent results in a variety of image-processing tasks [28,29,30,31,32,33]. As the importance of sparsity in signal processing became clear [34], redundant multiscale bases [35,36,37,38,39] that provide a sparser representation of signals were further developed and show superior performance [40,41,42,43]. Owing to the distinct differences between macro-scale images and fluorescence images, the MRA approach for fluorescence images remains an underexplored domain so far.
Here, to elevate the fidelity of the algorithmic approach, we propose an MRA framework that captures two major characteristics of fluorescence images: high contrast across the edge and high continuity along the edge. MRA outperforms state-of-the-art variance-based regularization with conspicuously improved signal–noise discrimination. More importantly, it seamlessly aligns with the tenets of physics-rooted iterative inverse filtering, thereby achieving computational-SR results that exhibit notable similarity and comparable error rates to those obtained by physical means. We further devise sectioning MRA (SecMRA) to deal with heavy-background conditions, which outperforms conventional schemes. We demonstrate that our methods can accomplish more challenging imaging tasks with ensured fidelity in various fluorescence microscopies, such as discerning ~ 60-nm features with SIM, capturing the dynamics of the endoplasmic reticulum (ER), and supporting long-term observation of organelle interactions.
2 Results
2.1 MRA deconvolution algorithm
Different from traditional variance-based regularizations that encapsulate the continuity attributes of fluorescence images (Note S1), we propose an MRA framework for noise control in the deconvolution process. The major drawback of variance-based regularizations is that they entangle the continuity within a biological structure with the continuity across its border with the background (Fig. S1). Mathematically, they simply designate components with large spatial derivatives as noise. We instead focus on two cardinal attributes of fluorescence images to differentiate noise: (1) high contrast across the edge, and (2) high continuity along the edge. Because most fluorophores label specific biological structures with high specificity against the surrounding background, fluorescence images contain sharp edges. High continuity is another important feature, arising from the connectivity of biological specimens and the spatial-sampling nature of fluorescence microscopes; however, this feature exists only within the biological structures, not globally. Consequently, fluorescence images carry abundant anisotropic information, namely continuous structures along edges in different directions. Based on the above analysis, we employ the framelet [37] and curvelet [35,36] transforms (Note S2) for noise control in fluorescence images. The framelet and curvelet transforms can effectively detect and encode the across-edge contrast and the along-edge continuity into high-value coefficients, respectively. By thresholding the framelet and curvelet coefficients, noise can be removed while preserving the two types of information outlined above.
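To make the thresholding principle concrete, the short Python sketch below hard-thresholds the detail coefficients of a multiscale decomposition of a noisy synthetic filament. An orthogonal wavelet from PyWavelets serves here only as a stand-in for the framelet and curvelet transforms used in MRA; the image, wavelet, and threshold are illustrative placeholders rather than part of the released MATLAB implementation (see Code availability).

```python
# Illustrative sketch: noise suppression by hard-thresholding multiscale coefficients.
# An orthogonal wavelet (PyWavelets) stands in for the framelet/curvelet pair used by MRA.
import numpy as np
import pywt

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[40:90, 60:64] = 1.0                      # a bright, sharp-edged "filament"
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

coeffs = pywt.wavedec2(noisy, 'db4', level=3)  # multiscale decomposition
thr = 3 * 0.3                                  # threshold ~ 3x the noise sigma
denoised_coeffs = [coeffs[0]] + [
    tuple(np.where(np.abs(d) > thr, d, 0.0) for d in detail)  # hard thresholding
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, 'db4')
```

High-value coefficients carrying edge contrast and along-edge continuity survive the threshold, while low-value noise coefficients are removed, which is the behavior MRA exploits.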
To verify the above analysis, we examined the framelet and curvelet sparsity of various organelle images at different noise levels. Noiseless fluorescence images show high sparsity in both the framelet and the curvelet domain, and this sparsity gradually decreases as the noise level increases (Fig. S2). By increasing the sparsity of the framelet and curvelet coefficients through hard thresholding, the sharp-edge and along-edge continuity features of the fluorescence image can be extracted from the noise contamination (Note S3 and Fig. S3). Based on the above assumptions and verifications, we propose the co-sparsity of the framelet and curvelet coefficients as the regularization of the fluorescence-image deconvolution model, termed MRA deconvolution:
$$\hat{x} = \mathop{\arg\min}\limits_{x}\; \left\| Ax - f \right\|_2^2 + \lambda_1 \left\| Wx \right\|_1 + \lambda_2 \left\| Cx \right\|_1$$
where f is the input degraded image, A denotes the blur kernel in matrix form, x is the recovered image, W denotes the framelet transform, C denotes the curvelet transform, λ1 and λ2 are two regularization parameters, and ||·||1 and ||·||2 denote the l1 and l2 norms, respectively. Reducing the value of the first term, also called the fidelity term, recovers high-frequency information. The second and third terms are the sparsity penalties in the framelet and curvelet domains, respectively, which mainly control the noise. FISTA is employed to minimize this optimization problem (Note S4.1). In contrast to mainstream statistical MLE deconvolutions such as Sparse [10], which solve the statistical likelihood problem \(\mathop{\arg\min}_{x} -\log p(f\,|\,x)\), we seek the model solution of the fidelity penalty through gradient descent.
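A schematic sketch of this minimization is given below. It is not the released MATLAB implementation: a single orthogonal wavelet replaces the framelet/curvelet pair, the blur kernel is assumed to be a known 2D PSF array, and the proximal step is approximated by soft-thresholding in the transform domain, which is exact only for orthonormal transforms.

```python
# Schematic FISTA-style MRA deconvolution: gradient step on the fidelity term,
# then an (approximate) proximal step realized as soft-thresholding of wavelet
# coefficients. Wavelet 'db4' stands in for the framelet/curvelet regularizers.
import numpy as np
import pywt
from scipy.signal import fftconvolve

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def wavelet_prox(x, t, wavelet='db4', level=3):
    c = pywt.wavedec2(x, wavelet, level=level)
    c = [c[0]] + [tuple(soft(d, t) for d in detail) for detail in c[1:]]
    return pywt.waverec2(c, wavelet)[:x.shape[0], :x.shape[1]]

def mra_deconv_sketch(f, psf, lam=0.01, n_iter=100):
    f = f.astype(float)
    L = np.sum(np.abs(psf)) ** 2 + 1e-12   # upper bound on the Lipschitz constant of 0.5*||Ay - f||^2
    x = f.copy()
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        # gradient of 0.5*||A y - f||^2, with A = convolution by the PSF (A^T = correlation)
        resid = fftconvolve(y, psf, mode='same') - f
        grad = fftconvolve(resid, psf[::-1, ::-1], mode='same')
        x_new = wavelet_prox(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2   # Nesterov momentum
        y = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return np.clip(x, 0, None)
```

In the actual algorithm the two transform-domain penalties carry separate weights λ1 and λ2 (Note S4.1); the sketch merges them into a single threshold for brevity.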
To enhance the ability of MRA to deconvolve time-lapse or 3D images with good continuity, we develop a spatiotemporal continuity denoising scheme based on soft-thresholding of 3D framelet and 3D dual-tree complex wavelet (DTCW) [44,45] coefficients (Note S5), which extract the sharp-edge and along-edge continuity features in the third dimension, respectively. The principle and workflow of the MRA deconvolution algorithm are shown in Fig. 1a; as a post-processing method, it can be applied to all kinds of fluorescence imaging techniques.
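As a simplified illustration of this spatiotemporal extension, the sketch below soft-thresholds the detail coefficients of a separable 3D wavelet decomposition of a (t, y, x) stack; a plain wavelet again stands in for the 3D framelet and DTCW transforms of Note S5, and the threshold is an illustrative placeholder.

```python
# Simplified spatiotemporal denoising: soft-threshold the detail coefficients of a
# separable 3D wavelet decomposition of a (t, y, x) stack. A plain wavelet replaces
# the 3D framelet / dual-tree complex wavelet transforms described in Note S5.
import numpy as np
import pywt

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def denoise_stack(stack, thr=0.1, wavelet='db2', level=2):
    coeffs = pywt.wavedecn(stack, wavelet, level=level)
    new_coeffs = [coeffs[0]]                   # keep the coarse approximation
    for detail in coeffs[1:]:                  # each level is a dict of detail bands
        new_coeffs.append({k: soft(v, thr) for k, v in detail.items()})
    rec = pywt.waverecn(new_coeffs, wavelet)
    return rec[:stack.shape[0], :stack.shape[1], :stack.shape[2]]
```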
2.2 MRA reveals superior noise-control performance
We first validate the efficacy of MRA for noise control on synthetic geometries with known ground truth (GT) (Note S6, Fig. S4). The results show that MRA deconvolution can faithfully recover these geometric structures under severe noise. To validate its performance on real fluorescence images, we captured raw SIM images of actin filaments with graded SNR by tuning the illumination power (Fig. 1b). Two parallel actin filaments can be resolved by MRA deconvolution even with 3% illumination power, whereas conventional SIM reconstruction can barely resolve them until the illumination power reaches 20% (Fig. 1c, d). The high-SNR reference clearly confirms the fidelity of MRA and its robustness to noise. Conversely, when resolving this sample, conventional regularization suffers from over-smoothing even as the SNR improves (Fig. S1). The unique design of across-edge and along-edge information extraction constitutes the advantage of MRA (Fig. S5).
Subsequently, we compared the performance of MRA with the state-of-the-art variance-based penalty on low-SNR time-lapse images (Fig. 1e–h). At an equivalent resolution level, the SNR improvement provided by MRA is ~ 10 dB and ~ 5 dB higher than that of Hessian deconvolution on the lysosome and microtubule time-lapse images, respectively (Fig. 1f, i). Notably, at heightened noise levels, Hessian regularization fails to effectively discriminate between noise and signal, as it only emphasizes continuous information (Fig. S6). MRA excels because important edge information is well preserved in high-value coefficients. Across various SNR conditions, MRA consistently outperforms Hessian regularization (Fig. S7) and shows noise-robust characteristics (Fig. 1g, j and Videos S1, 2).
With its remarkable noise-control proficiency, MRA can support live-cell imaging at faster speed and longer duration by compensating photon-limited fluorescence images. As an example, we demonstrate that MRA can help capture the rapid dynamics of the ER [46,47], a large organelle that plays a crucial role in various life activities. Using a widefield microscope, we acquired ER-tubule time-lapse images (Fig. S8a and Video 3) with a temporal resolution of ~ 417 Hz over a duration of ~ 10 min. Because of the ultralow SNR, the ER network in the original images is almost invisible and cannot be segmented even by a well-trained TWS machine-learning model [48], yet it is finely resolved after noise reduction by MRA deconvolution. Consequently, the rapid movement of vesicles and the variation of the ER network can be finely visualized (Fig. S8b).
2.3 MRA improves fluorescence imaging resolution with ensured fidelity
In MRA, we adopt the model solution to the inverse problem for resolution improvement, which ensures reduction of the fidelity penalty. The alternative statistical solution, such as RL iteration, is more commonly employed in fluorescence imaging because its MLE nature makes it more robust to noise. However, this conversely makes it an inherently imprecise solution to the fidelity penalty, which hampers fidelity and induces artifacts [18,19]. We demonstrate in simulation that RL does not recover the blurred image to the GT even with zero noise, generating considerable artifacts and false high-frequency details (Fig. S9). Conversely, iterative gradient descent on the fidelity penalty converges to the GT, guaranteed by the convexity of the fidelity term. When processing practical fluorescence images contaminated by noise, our MRA regularization aids noise control. MRA aligns finely with the tenets of the physics-rooted model solution because it does not constrain global continuity (Fig. S10). Owing to the inherent non-smoothness of high-resolution images, the conventional variance-based penalty conflicts with iterative inverse filtering in the model solution. Additionally, we introduce a modification to the FISTA iteration that accelerates reduction of the fidelity cost in high-SNR scenarios, which benefits the inference of high-frequency information (Note S4.3, Fig. S11).
With the aforementioned design, we contend that MRA prioritizes fidelity in computational resolution extension, that is, it truly improves resolution without producing notable artifacts. To assess the fidelity of MRA and conventional mainstream MLE methods, we employ metrics focusing on two aspects: the reversibility to the low-resolution inputs and the resemblance to high-resolution GT. On a confocal mitochondria image, all methods exhibit resolution improvement (Fig. 2a, b). Notably, artificial structures appear in the MLE results upon re-blurring but not in MRA. Sparse and the commercial Huygens deconvolution (Scientific Volume Imaging) did not resolve the subtle cristae-cluster structure, which was finely unveiled after the ~ 1.5-fold resolution improvement provided by MRA. The Sparse- and Huygens-deconvolved images possess higher resolution as measured by decorrelation analysis [49], but this largely stems from simple amplification of high-frequency components in the MLE process and lacks fidelity. This is also reflected in the observation that MLE-deconvolved images show several-fold higher resolution-scaled error [50] than MRA, which authentically reduces the fidelity penalty (Fig. 2c, d). Another deficit of the MLE method is that it can arbitrarily increase sharpness. At typical iteration numbers, MLE methods over-infer structures' full-width-at-half-maximum (FWHM) far below the theoretical unblurred value (Fig. S12). In contrast, with increasing iterations, MRA's inference approaches the theoretical value set by the PSF.
Subsequent evaluations employ low–high-resolution reference pairs. The MRA-deconvolved widefield actin image achieves a ~ 1.4-fold resolution improvement, yielding high similarity (SSIM: 0.85) with the SR-SIM result (Fig. 2e). In contrast, MLE deconvolution produces spuriously sharpened edges and fails to compute genuine high-frequency information (Fig. 2f, g). Notably, the fidelity metrics of the SIM and MRA images are comparable, significantly surpassing those of the MLE methods (Fig. 2h). Assessments on widefield–SIM pairs of other samples further confirm MLE's tendency to introduce illusory high-frequency information, while MRA's fidelity closely resembles the physical SR results (Fig. S13). On the open-source BioSR dataset [15], we further show that MRA can resolve a ~ 60-nm feature in a linear-SIM image, which is confirmed by the nonlinear-SIM result (Fig. 2i). Conversely, the two MLE counterparts failed to resolve such subtle structures and produced false high-frequency information that contradicts the nonlinear-SIM result (Fig. 2j). We also examined the performance of MLE deconvolution with different iteration numbers. The results show that reducing the iteration number can alleviate over-sharpening in MLE deconvolution, but it still fails to infer real high-resolution structures as MRA does (Fig. S14).
Incorporating MRA regularization into the MLE-based deconvolution framework also improves its performance, yet the result is still affected by the deficits inherent to MLE (Fig. S15). The presence of noise necessitates weighting the regularization term in MRA, which leads to some gap between the MRA result and the ideal diffraction-free situation. In this case, executing a few RL iterations (1–5) after MRA convergence may to some extent mitigate the discrepancy in image contrast and enhance visual perception (Fig. S16). The high-frequency information framework inferred by MRA ensures fidelity, but excessive RL iterations can still damage it.
The resolving power of MRA can be readily applied to capture subtle structures across imaging modalities and samples. MRA shows impressive resolving ability on the Argo-SIM slide, separating parallel fluorescent lines with a marginal distance of 30 nm (Fig. 3a). On the Nanoruler sample, the resolving capability of MRA is also substantiated, assisting the separation of spot pairs 70 nm apart under SIM imaging (Fig. S17). An equivalent ~ 70-nm resolution was achieved on actin structures under the SIM imaging modality (Fig. 3b–d). MRA can also be used to compute SR information directly from widefield images (Fig. 3e). Ring structures in the ER network blurred by the widefield PSF are finely resolved after MRA deconvolution, allowing a finer observation of ER dynamics (Fig. 3f and Video S4). We also demonstrate the utility of MRA for commercial LiveSR imaging to capture the dynamics of fine mitochondrial structures (Fig. 3g and Video S5). MRA deconvolution discerns previously obscured mitochondrial cristae-cluster structures within the original LiveSR images, unveiling their detailed dynamic features (Fig. 3h).
2.4 SecMRA deconvolution algorithm
Although MRA shows superior deconvolution performance, it is inherently limited in dealing with fluorescence background, since only planar information is known. Traditional solutions include ||x||1 regularization [10,51] and preliminary background subtraction [49]. However, detail loss and weakness against strong background are inevitable because of the naïve global thresholding and the SNR loss in background subtraction (Fig. S18). To address this issue, we provide a novel scheme that introduces a bias-thresholding mechanism into the MRA iteration, termed SecMRA deconvolution (Note S4.2). This method selectively penalizes the background information and controls the noise in each deconvolution iteration, enabling background mitigation with better-preserved details. Within the deconvolution iteration, the trade-off between background attenuation and the MRA contribution is still inherent. To alleviate ultra-strong fluorescence background, we add an optimized framelet-based preliminary background-subtraction step (Note S4.2 and Fig. S19); the whole procedure is shown in Fig. 4a.
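The exact bias-thresholding rule is detailed in Note S4.2. As a rough, hypothetical illustration of the preliminary multiscale background-estimation idea only (not the SecMRA algorithm itself), the sketch below keeps the coarse approximation of a deep wavelet decomposition as a smooth background estimate and subtracts it; the wavelet and decomposition depth are illustrative choices.

```python
# Rough illustration of multiscale background estimation (not the SecMRA algorithm
# itself): keep only the coarse approximation of a deep wavelet decomposition as a
# smooth background estimate and subtract it from the image.
import numpy as np
import pywt

def subtract_background(img, wavelet='sym8', level=5):
    img = img.astype(float)
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # zero all detail bands so the reconstruction keeps only low-frequency content
    bg_coeffs = [coeffs[0]] + [tuple(np.zeros_like(d) for d in detail) for detail in coeffs[1:]]
    background = pywt.waverec2(bg_coeffs, wavelet)[:img.shape[0], :img.shape[1]]
    return np.clip(img - background, 0, None), background
```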
With its remarkable background-inhibition ability, SecMRA deconvolution extends MRA to a wider range of imaging conditions. We acquired images of mouse kidney cell nuclei using widefield and spinning-disk confocal (SD-confocal) microscopy, which exhibited significant background interference (Fig. 4b). After SecMRA deconvolution, the images show substantial background reduction with finely preserved details. The SecMRA-deconvolved widefield image even yields better quality than the original SD-confocal image. The SoRa images of mouse kidney cell actin are also affected by noise and out-of-focus signal, which were effectively removed by SecMRA (Fig. 4c and Video S6). Background significantly limits MRA's extraction of high-frequency information, a limitation that SecMRA finely overcomes (Fig. 4d). SecMRA also inherits the high-fidelity feature in resolving high-frequency structures, providing 1.6-fold and 2.2-fold fidelity-ensured resolution improvement on the SD-confocal and SoRa mitochondria images, respectively (Fig. 4e). The SecMRA-deconvolved SD-confocal image exhibits high similarity to the SoRa image. We further verify the fidelity of SecMRA resolution enhancement on other low–high-resolution imaging pairs (Fig. S20). Additionally, we demonstrate the effectiveness of SecMRA in other common setups, including LiveSR (Fig. 4f), SIM (Fig. 4g), STED (Fig. 4h), and light-sheet microscopy [52,53] (Fig. S21).
2.5 SecMRA enables low-toxicity, long-term organelle interaction imaging
As a general algorithm for enhancing fluorescence images, SecMRA has great potential to support a variety of life-science research. Here we demonstrate its application to the observation of microtubule-associated and mitochondria-associated organelle interactions, which are crucial for cellular function [47,54]. Because of the photosensitivity of these organelles, low-toxicity imaging is necessary, which SecMRA can greatly support.
We first focused on microtubule-related interactions and performed two-color widefield imaging of microtubules and lysosomes in COS-7 cells (Fig. 5a and Video S7). SecMRA effectively reduced background, noise, and blurring, enabling SR visualization of the microtubules and lysosomes (Fig. 5b; the resolution of lysosomes and microtubules is improved by ~ 1.5-fold and ~ 2.2-fold, respectively). With the assistance of SecMRA, we can finely observe the interactions of lysosomes and microtubules over a long term. The lysosomes in the region shown in Fig. 5c move within the microtubule framework, while the lysosomes in another region shown in Fig. 5d rapidly move along a microtubule. In another dataset, we also observed lysosomes that stayed almost static during the ~ 12-min recording period and remained closely bound to microtubules (Fig. S22 and Video S8). We also demonstrate the efficacy of SecMRA in SIM imaging (Fig. 5e and Video S9). The raw SIM images suffered from excessive background and noise that hindered observation. SecMRA effectively reduced the noise and compensated the signal loss caused by photobleaching (Fig. 5f, g). With the assistance of SecMRA, we observed that some lysosomes gradually shifted away from the microtubule-accumulation region where depolymerization occurred (Fig. 5h, i). Notably, conventional deconvolution algorithms typically sacrifice low-intensity and high-frequency details when enhancing the densely meshed microtubule images (Fig. S18).
Subsequently, we employed a LiveSR microscope to image the interaction between mitochondria and microtubules. Because of the photosensitivity of both organelles, we used low-intensity illumination to ensure their viability (Fig. 6a and Video S10). Nevertheless, the tubulin signal was plagued by noise in the presence of undesired ultrabright regions. With the intensity-correction mode of SecMRA, we were able to recover the tubulin signal and alleviate the undesired intensity distribution by selectively attenuating the ultrabright regions, enabling visualization of a mitochondrial fission process around a microtubule (Fig. 6b). Compared with traditional algorithms, this function allows SecMRA to enhance images more flexibly according to specific requirements (Fig. S18). Our method also facilitated the observation of two consecutive mitochondrial fission and fusion events around microtubules in the LiveSR system (Fig. S23 and Video S11).
Fluorescence imaging near the nucleus may suffer from severe background because of the dense cellular structure there. Microscopes with better optical sectioning are required for such imaging, which in turn adds cost and phototoxicity. Here, we demonstrate that by exploiting the computational sectioning and denoising abilities of SecMRA, interactions between mitochondria and ER tubules near the nucleus can be observed with great precision even using the simplest widefield microscopy. We employed a low illumination intensity to support imaging for more than 30 min without severe photobleaching and phototoxicity, resulting in excessive noise and background that submerged the signal (Fig. 6c, d). SecMRA finely recovers the structural information, whereas conventional algorithms are incompetent (Fig. S18). With the assistance of SecMRA, we observed a mitochondrion division event near the ER network (Fig. 6e and Video S12). We also observed a mitochondrion extend by ~ 5.2 μm, then gradually shrink and attach to the surrounding ER network (Fig. 6f, g). The MOC value between the mitochondrial distal end and the ER tubules finely describes the mitochondrion's movement (Fig. 6h and Fig. S24). During the extension and shrinkage process, peaks and valleys of the MOC curve formed as the mitochondrion shuttled through the ER network. The mitochondrion's distal end then bound to the ER network, resulting in a continuously high MOC value. Some deformations of the ER network briefly decreased the MOC value; the mitochondrion then moved to rebind with the ER, causing the MOC value to increase again.
3 Discussion
Over the past decades, many computational technologies have emerged to improve the quality of fluorescence images and enhance imaging capabilities. However, the presence of artifacts has made achieving computational SR contentious. In this work, we demonstrate that by reasonably constraining the across-edge contrast and along-edge continuity of fluorescence images, deconvolution via a model-solution framework attains assured computational SR across diverse modalities. In contrast, existing statistical MLE deconvolutions frequently yield artifacts rather than authentic SR information. Our assertion is substantiated by MRA's resemblance to physical-SR results across several metrics: (1) comparable resolution of intricate structures, (2) high global structural similarity, and (3) equivalent resolution-scaled errors. MRA proves highly effective for subcellular structures featuring pronounced sharp edges and along-edge continuity, which allow sparse representation in the framelet and curvelet domains. These constraints remain feasible even for images with less distinct features, such as electron microscopy images (Fig. S25), because noise generally reduces sparsity in the framelet and curvelet domains. We further designed SecMRA with a bias-thresholding mechanism to address strong multilayer emitters in various imaging modalities. Both MRA and SecMRA show good linearity in the signal portion (Fig. S26 and Table S1).
We elucidate the influence of the parameters through simulations and diverse fluorescence data (Note S7, Figs. S27–S32), showing that the MRA parameters mainly adjust the balance between noise control and deblurring. The relative objectivity of the parameters in our pipeline also benefits fidelity in the deconvolution process (Fig. S33). To facilitate the dissemination of our techniques, we offer MATLAB source code for developers, along with interactive software for users. The imaging conditions and algorithm parameters are provided to ensure reproducibility (Tables S2–S6). Our software automates parameter selection based on noise levels estimated through curvelet sparsity (Fig. S34), and we also provide manual parameter-tuning guidance in the user manual. We analyzed potential MRA failures and artifacts stemming from improper parameter choices or extreme conditions to facilitate assessment of the deconvolution outcomes (Note S8).
We hope that, as a fidelity-ensured deconvolution technique, MRA and SecMRA can help advance biological research. There is also vast space for combining MRA with other techniques. The impressive resolution enhancement and background inhibition of SecMRA are extraordinarily beneficial for widefield microscopy and can hopefully be extended to event-triggered microscopy setups [55,56]. Moreover, incorporating additional physical models of the fluorophores as constraints may further boost the performance of the algorithm [57,58,59]. While deep-learning techniques are emerging for image restoration, their generalization capability and uncertainty remain challenging, especially in life-science research demanding high fidelity. For fluorescence imaging, where precision and fidelity are paramount, we believe our analytical MRA model holds unique advantages. Moreover, we believe that introducing our MRA-based prior knowledge into deep-learning models may further improve their performance, as recent works indicate the effectiveness of incorporating analytical models into deep-learning networks [14,16].
4 Methods
4.1 Fluorescence microscopes
We employed various fluorescence microscopes to test the effectiveness of the MRA and SecMRA deconvolution algorithms. The commercial Airy Polar-SIM super-resolution microscopy system (Airy Technologies Co., Ltd, China) was used to capture the widefield and SIM images displayed in Figs. 1–6. The Zeiss LSM 980 confocal laser scanning microscope with Airyscan 2 (Zeiss, Germany) was used to image the mitochondria displayed in Fig. 2a. The Nikon CSU-W1 SoRa spinning-disk microscopy system (Nikon, Japan) was used to capture the widefield, SD-confocal, and SoRa images displayed in Fig. 4b, c and e. A Yokogawa spinning-disk microscope equipped with a LiveSR super-resolution module (Gataca Systems, France) was employed to capture the LiveSR images displayed in Figs. 3, 4. A Leica SP8 STED 3X microscope (Leica, Germany) was employed to capture the STED image displayed in Fig. 4. The detailed imaging parameters of each image are listed in Table S2.
4.2 Cell maintenance and preparation
COS-7 cells and U2OS cells were cultured in high-glucose DMEM (Gibco, 11995–040) supplemented with 10% fetal bovine serum (FBS, Gibco, 10099) and 1% penicillin–streptomycin antibiotics (10,000 U/mL, Gibco, 15140148) in an incubator at 37 °C with 5% CO2. For live-cell imaging experiments, cells were seeded in µ-Slide 8-Well chambers (ibidi, 80827). For fixed-cell imaging experiments, cells were seeded on coverslips (Thorlabs, CG15CH2). Imaging samples were prepared once the cells reached a confluency of 75%.
4.3 Fixed sample
4.3.1 Mouse kidney section sample
The specimens of phalloidin-AF568-labeled actin in mouse kidney sections are commercially available (FluoCells Prepared Slide #3, Invitrogen, F24630).
4.3.2 BPAEC cell sample
We used the commercial FluoCells™ Slide #1 (ThermoFisher, F36924) to test the performance of our algorithms. It contains BPAEC cells stained with MitoTracker™ Red CMXRos, Alexa Fluor™ 488 phalloidin, and DAPI.
4.3.3 Argo-SIM standard slide
To verify the resolving power and fidelity of the algorithm, a commercial slide with gradually spaced fluorescent lines (Argo-POWER SIM Slide V2, Argolight, France) was employed for SIM imaging. The sample consists of fluorescent line doublets (spacing from 0 to 390 nm, calibrated as marginal distances). The excitation wavelength is 488 nm.
4.3.4 GATTAquant nanorulers
To verify the resolving power and fidelity of the algorithm, a commercial Nanoruler sample (GATTA-STED 70R, GATTAquant, Germany) was employed for SIM imaging. It contains calibrated fluorescent spot pairs separated by 70 nm. The excitation wavelength is 637 nm.
4.3.5 Labeling actin in fixed U2OS cells
The cells were fixed with 4% formaldehyde (R37814, Invitrogen) for 15 min at room temperature. After washing the sample with PBS, we permeabilized the samples with 0.1% Triton™ X-100 (Invitrogen, HFH10) for 15 min. After washing with PBS, we stained the actin filaments with Alexa Fluor™ 568 Phalloidin (Invitrogen, A12380) or Alexa Fluor™ 488 Phalloidin (Invitrogen, A12379) for 1 h at room temperature. We placed the coverslips in a covered container to prevent evaporation during incubation. We then washed the samples two or more times with PBS and placed the stained coverslips in a dark place to dry naturally. The coverslip was sealed with 30 µL of Prolong mounting medium (Invitrogen, P36984) and kept at 4 °C overnight to air-dry before observation.
4.3.6 Labeling membrane structures in fixed U2OS cell
To label all lipid-membrane structures in the cell, 1 µg/mL Nile Red (Invitrogen, N1142) was added to the culture medium 30 min before imaging and was present during imaging.
4.3.7 Centrosome sample for expansion microscopy
The centrosome sample for expansion microscopic imaging was a gift from Jingyan Fu’s laboratory. The associated sample preparation procedure and imaging method  were described previously [60].
4.4 Live-cell sample
4.4.1 Transfection of GFP-KDEL plasmid to mark endoplasmic reticulum
To label the dynamic structure of the ER in living cells, we transfected GFP-KDEL to COS-7 cells with Lipofectamine 3000 (Invitrogen, L3000) transfection reagent. The imaging was performed 36–48 h after transfection.
4.4.2 Labeling mitochondria in living cells
For widefield and SIM imaging: The COS-7 cells were labeled with PKmito RED (Cytoskeleton, CY-SC052) or PKmito DEEP RED (Cytoskeleton, CY-SC055) for 15 min in DMEM. After labeling, we washed out the dye 2–3 times with fresh pre-warmed DMEM before imaging.
For LiveSR imaging: The U2OS cells were labeled with 250 nM MitoTracker Green FM (Invitrogen, M7514) 30 min before imaging.
For STED imaging: The COS-7 cells were labeled with HBmito Crimson (MCE, HY-D2346) [61] at 37°C for 10 min before imaging.
4.4.3 Labeling tubulin in living cells
For widefield and SIM imaging: We used the SiR Tubulin Kit (Cytoskeleton, CY-SC002) to label tubulin in live COS-7 cells at a concentration of 1 μM. We then incubated the cells with 5% CO2 at 37 °C for 1 h before imaging.
For LiveSR imaging: The tubulin-GFP plasmid was transfected into U2OS cells with Lipofectamine 3000 (Invitrogen, L3000) under the standard protocol.
4.4.4 Labeling lysosome in living cells
We used LysoView™ 488 (Biotium, 70067) to stain the lysosome in COS-7 cells for 15–30 min without washing.
4.5 Pearson correlation coefficient
Pearson correlation coefficient measures the similarity between two images:
$$r = \frac{\frac{1}{N}\sum_{i}\left(f_{i} - \mu_{f}\right)\left(g_{i} - \mu_{g}\right)}{\sigma_{f}\,\sigma_{g}}$$
where f and g are the two images to be compared, μf and μg are their mean values, σf and σg are their standard deviations, and N is the number of pixels.
In this work, we used Pearson correlation to measure the correlation of the raw image with the noisy image and MRA deconvolved image in the simulation presented in Fig. 1. Moreover, we also used the Pearson correlation to quantify the correlation between two organelles.
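For reference, the coefficient can be computed directly, for example in Python:

```python
import numpy as np

def pearson(f, g):
    # Pearson correlation between two images, flattened to 1-D
    return np.corrcoef(f.ravel(), g.ravel())[0, 1]
```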
4.6 Mander’s overlap coefficient
Mander’s overlap coefficient (MOC) is an index that measures the overlap of two organelles. Compared with Pearson correlation, MOC has better interpretability and focuses on absolute colocalization. MOC is calculated as follows:
$$\mathrm{MOC} = \frac{\sum_{i} g_{i}\,\mathrm{Mask}_{i}}{\sum_{i} g_{i}}$$
where g is the gray value of one organelle channel and Mask is the binarized image of the other organelle.
To calculate the MOC between the mitochondrial distal end and the ER tubules shown in Fig. 6h, we used the TWS machine-learning tool to segment the ER tubules, which generates the Mask. The distal-end region (15 pixels) was then used to calculate the MOC value.
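A direct implementation of this overlap measure, assuming the intensity-fraction form of MOC given above and a binary segmentation mask such as the TWS output, could look like the following sketch:

```python
import numpy as np

def manders_overlap(gray, mask):
    # Fraction of the total intensity of one channel that falls inside the
    # binary mask of the other channel
    gray = gray.astype(float)
    return np.sum(gray[mask > 0]) / np.sum(gray)
```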
4.7 Image decorrelation analysis
We used image decorrelation analysis [49] to estimate image resolution and evaluate image SNR from the frequency-domain perspective. After standard edge apodization that mitigates high-frequency artifacts, it calculates the cross-correlation between the image spectrum and its normalized spectrum; this process is then repeated with the normalized spectrum additionally filtered by a binary mask. The decorrelation curve is expressed as follows:
$$d(r) = \frac{\int \mathrm{Re}\left\{ I(\vec{k})\, I_{n}^{*}(\vec{k})\, M(\vec{k};r) \right\} \mathrm{d}\vec{k}}{\sqrt{\int \left| I(\vec{k}) \right|^{2} \mathrm{d}\vec{k} \int \left| I_{n}(\vec{k})\, M(\vec{k};r) \right|^{2} \mathrm{d}\vec{k}}}$$
where \(\vec{k} = [k_x, k_y]\) is the frequency-domain coordinate, \(I(\vec{k})\) and \(I_n(\vec{k})\) are the image Fourier spectrum and its normalized spectrum, respectively, and \(M(\vec{k}; r)\) is a binary mask of radius r.
The normalized Fourier spectrum balances the contributions of signal and noise, which is crucial for distinguishing them. Because most information is recorded in the low-frequency region owing to the band-limited nature of the optical system, the decorrelation curve first increases with the radius r until most of the signal is included, then decreases with increasing radius as the noise contribution grows. The peak of the decorrelation curve, A0 (between 0 and 1), reflects the image SNR from the Fourier-domain perspective.
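A simplified sketch of the decorrelation-curve computation (omitting the edge apodization and the high-pass refinement steps of the full algorithm [49]) is given below; the radius sampling is an illustrative choice.

```python
import numpy as np

def decorrelation_curve(img, n_radii=50):
    # Fourier spectrum and its amplitude-normalized counterpart
    I = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    In = I / (np.abs(I) + 1e-12)
    # radial frequency coordinate normalized so that 1.0 corresponds to Nyquist
    ny, nx = img.shape
    ky = np.fft.fftshift(np.fft.fftfreq(ny))
    kx = np.fft.fftshift(np.fft.fftfreq(nx))
    KX, KY = np.meshgrid(kx, ky)
    k = np.sqrt(KX ** 2 + KY ** 2) / 0.5
    d = []
    for r in np.linspace(1.0 / n_radii, 1.0, n_radii):
        M = k <= r                      # binary low-pass mask of radius r
        num = np.sum(np.real(I * np.conj(In * M)))
        den = np.sqrt(np.sum(np.abs(I) ** 2) * np.sum(np.abs(In * M) ** 2))
        d.append(num / (den + 1e-12))
    return np.array(d)
```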
4.8 NanoJ-SQUIRREL resolution scaled error
The SQUIRREL algorithm [50] was employed to evaluate the resolution-scaled error of a resolution-enhanced image f based on a low-resolution image g and a resolution-scaling function (RSF). It starts by correcting the lateral mismatch between the two images and then finds the optimal parameters for convolving the resolution-enhanced image f back to g:
$$\left\{\hat{\alpha},\hat{\beta},\hat{\sigma}\right\} = \mathop{\arg\min}\limits_{\alpha,\beta,\sigma} \sum_{i}\left( g_{i} - \left[\mathrm{RSF}_{\sigma} \otimes \left(\alpha f + \beta\right)\right]_{i} \right)^{2}$$
Then f is convolved back using the optimal parameters, \(f_{\mathrm{RS}} = \mathrm{RSF}_{\hat{\sigma}} \otimes (\hat{\alpha} f + \hat{\beta})\), which is used to calculate the resolution-scaled error (RSE) and the error map:
$$\mathrm{RSE} = \sqrt{\frac{1}{N}\sum_{i}\left( g_{i} - f_{\mathrm{RS},i} \right)^{2}}$$
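A condensed sketch of this procedure is given below, assuming the two images are already registered and the RSF width σ is given (NanoJ-SQUIRREL additionally optimizes σ); a Gaussian filter serves as the RSF convolution.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def resolution_scaled_error(sr, lr, sigma):
    # Blur the resolution-enhanced image with a Gaussian RSF of width sigma
    blurred = gaussian_filter(sr.astype(float), sigma)
    # Least-squares fit of intensity scale/offset: lr ~ alpha * blurred + beta
    A = np.stack([blurred.ravel(), np.ones(blurred.size)], axis=1)
    alpha, beta = np.linalg.lstsq(A, lr.ravel().astype(float), rcond=None)[0]
    f_rs = alpha * blurred + beta
    rse = np.sqrt(np.mean((lr - f_rs) ** 2))
    error_map = np.abs(lr - f_rs)
    return rse, error_map
```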
4.9 SSIM
We used SSIM to measure the similarity between the convolved-back image and the original image to evaluate the fidelity of the deconvolution algorithm, as a supplement to the NanoJ-SQUIRREL analysis. The SSIM between two images is calculated as follows:
$$\mathrm{SSIM}(f,g) = \frac{\left(2\mu_{f}\mu_{g} + c_{1}\right)\left(2\sigma_{fg} + c_{2}\right)}{\left(\mu_{f}^{2} + \mu_{g}^{2} + c_{1}\right)\left(\sigma_{f}^{2} + \sigma_{g}^{2} + c_{2}\right)}$$
where f and g are the two images to be compared, μf and μg are their mean values, σf and σg are their standard deviations, σfg is the covariance of f and g, and c1 and c2 are two constants that stabilize the result (c1 = (k1L)2, c2 = (k2L)2, where L is the dynamic range of the pixel values, k1 = 0.01, and k2 = 0.03).
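In practice this metric is available off the shelf, for example via scikit-image:

```python
from skimage.metrics import structural_similarity

def ssim_index(f, g):
    f, g = f.astype(float), g.astype(float)
    # data_range sets L, the dynamic range entering c1 and c2
    return structural_similarity(f, g, data_range=g.max() - g.min())
```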
4.10 Calculation of fidelity penalty
The fidelity penalty is computed simply by summing the squared differences between the convolved-back image and the original image. When computing the penalty value of images obtained by different algorithms, a linear intensity transformation must be considered because the output results are normalized. Therefore, we searched for the minimal fidelity penalty over a scaling factor multiplying the deconvolved images, which yields the results shown in Fig. 2d.
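A minimal sketch of this computation, using the closed-form optimal scaling factor instead of a search and assuming the PSF is supplied as a 2D array:

```python
import numpy as np
from scipy.signal import fftconvolve

def fidelity_penalty(decon, raw, psf):
    decon, raw = decon.astype(float), raw.astype(float)
    # Convolve the deconvolved image back with the PSF
    reblurred = fftconvolve(decon, psf, mode='same')
    # Closed-form optimal scaling factor minimizing ||s * reblurred - raw||^2
    s = np.sum(reblurred * raw) / np.sum(reblurred ** 2)
    return np.sum((s * reblurred - raw) ** 2)
```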
4.11 SNR, PSNR, and SBR estimation
In simulation, we calculate the SNR and PSNR metrics using GT as reference:
$$\mathrm{SNR} = 10\log_{10}\frac{\sum_{j} f_{j}^{2}}{\sum_{j}\left(f_{j} - g_{j}\right)^{2}},\qquad \mathrm{PSNR} = 10\log_{10}\frac{\left(2^{i} - 1\right)^{2}}{\frac{1}{N}\sum_{j}\left(f_{j} - g_{j}\right)^{2}}$$
where f denotes the GT image, g denotes the degraded image, and i is the bit depth of the image.
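These standard definitions translate directly to code; the sketch below assumes the bit depth is supplied explicitly.

```python
import numpy as np

def snr_psnr(gt, img, bit_depth=16):
    gt, img = gt.astype(float), img.astype(float)
    mse = np.mean((gt - img) ** 2)
    snr = 10 * np.log10(np.sum(gt ** 2) / np.sum((gt - img) ** 2))
    psnr = 10 * np.log10((2 ** bit_depth - 1) ** 2 / mse)
    return snr, psnr
```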
In the simulation shown in Fig. 1d–h, to give practical guidance on the noise levels that MRA can handle, we segmented the signal portion of the image and used its average value as the numerator of the SNR fraction.
Considering that there is no GT image in practical imaging, we calculate the image SNR (dB) using the following commonly used formula:
where Isignal denotes the intensity of the signal, Inoise denotes the intensity of noise, and b denotes the background intensity.
We estimate Isignal by calculating the average intensity in a selected region that is taken as the signal, and we estimate Inoise and b by calculating the intensity standard deviation and the average value, respectively, in a selected non-signal region.
The PSNR is calculated as follows:
where i is the bit depth of the image.
A similar approach is used to calculate SBR:
Data availability
The fluorescence images displayed in the work are publicly available at figshare repository https://figshare.com/articles/Fig./Fluorescence_image_zip/24100263.
Code availability
The MATLAB source code and user-interactive software including a user manual are available with the paper, which can also be downloaded from: https://github.com/YiweiHou/MRA-SecMRA-deconvolution.
References
J.W. Lichtman, J.A. Conchello, Fluorescence microscopy. Nat. Methods 2, 910–919 (2005)
E. Betzig et al., Imaging intracellular fluorescent proteins at nanometer resolution. Science 313, 1642–1645 (2006)
M.G.L. Gustafsson, Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. short communication. J. Microsc. 198, 82–87 (2000)
S.W. Hell, J. Wichmann, Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy. Opt. Lett. (1994). https://doi.org/10.1364/OL.19.000780
M.J. Rust, M. Bates, X. Zhuang, Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods 3, 793–796 (2006)
L.L. Wang et al., Hybrid rhodamine fluorophores in the visible/NIR region for biological imaging. Angewandte Chemie-International Edition 58, 14026–14043 (2019)
Q.S. Zheng et al., Ultra-stable organic fluorophores for single-molecule research. Chem. Soc. Rev. 43, 1044–1056 (2014)
K. Chu et al., Image reconstruction for structured-illumination microscopy with low signal level. Opt. Express (2014). https://doi.org/10.1364/OE.22.008687
X. Huang et al., Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy. Nat. Biotechnol. 36, 451–459 (2018)
W. Zhao et al., Sparse deconvolution improves the resolution of live-cell super-resolution fluorescence microscopy. Nat. Biotechnol. 40, 606–617 (2021)
J. Chen et al., Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nat. Methods 18, 678–687 (2021)
L. Jin et al., Deep learning enables structured illumination microscopy with low light levels and enhanced speed. Nat. Commun. (2020). https://doi.org/10.1038/s41467-020-15784-x
X. Li et al., Real-time denoising enables high-sensitivity fluorescence time-lapse imaging beyond the shot-noise limit. Nat. Biotechnol. (2022). https://doi.org/10.1038/s41587-022-01450-8
Y. Li et al., Incorporating the image formation process into deep learning improves network performance. Nat. Methods 19, 1427–1437 (2022)
C. Qiao et al., Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat. Methods 18, 194–202 (2021)
C. Qiao et al., Rationalized deep learning super-resolution microscopy for sustained live imaging of rapid subcellular processes. Nat. Biotechnol. (2022). https://doi.org/10.1038/s41587-022-01471-3
Z. Wang et al., Real-time volumetric reconstruction of biological dynamics with light-field microscopy and deep learning. Nat. Methods 18, 551–556 (2021)
D.L. Snyder, M.I. Miller, The Use of Sieves to Stabilize Images Produced with the EM Algorithm for Emission Tomography. IEEE Trans. Nucl. Sci. 32, 3864–3872 (1985)
R.L. White, in Instrumentation in Astronomy VIII (1994)
A. Beck, M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imag. Sci. 2, 183–202 (2009)
A. Haar, Zur theorie der orthogonalen funktionensysteme. Math. Annalen (1910). https://doi.org/10.1007/BF01456326
A. Grossmann, J. Morlet, Decomposition of Hardy functions into square integrable wavelets of constant shape. SIAM J. Math. Anal. 15, 723–736 (1984)
J.J. Koenderink, The structure of images. Biol. Cybern. 50, 363–370 (1984)
S.G. Mallat, Multifrequency channel decompositions of images and wavelet models. IEEE Trans. Acoust. Speech Signal Process. 37, 2091–2110 (1989)
S.G. Mallat, A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 11, 674–693 (1989)
I. Daubechies, Orthonormal bases of compactly supported wavelets. Commun. Pure Appl. Math. 41, 909–996 (1988)
I. Daubechies, The wavelet transform, time-frequency localization and signal analysis. IEEE Trans. Inf. Theory 36, 961–1005 (1990)
A. Aldroubi et al., Wavelet Appl. Signal Image Proc. 3169, 389–399 (1997)
M.R. Banham, A.K.J.I.T.O.I.P. Katsaggelos, Spatially adaptive wavelet-based multiscale image restoration. IEEE Trans. Image Proc. 5, 619–634 (1996)
M.A. Figueiredo, R.D. Nowak, IEEE Int. Conf. Image Proc. 2, 782 (2005)
A.S. Lewis, G. Knowles, Image compression using the 2-D wavelet transform. IEEE Trans. Image Process. 1, 244–250 (1992)
J.-L. Starck, A. Bijaoui, Filtering and deconvolution by the wavelet transform. Signal Process. 35, 195–211 (1994)
J.D. Villasenor, B. Belzer, J. Liao, Wavelet filter evaluation for image compression. IEEE Trans. Image Process. 4, 1053–1060 (1995)
S.S. Chen, D.L. Donoho, M. Saunders, Atomic decomposition by basis pursuit. SIAM Rev. 43, 129–159 (2001)
E. Candès, L. Demanet, D. Donoho, L. Ying, Fast discrete curvelet transforms. Multiscale Model. Simul. 5, 861–899 (2006)
E.J. Candès, D. Donoho, New tight frames of curvelets and optimal representations of objects with piecewise C2 singularities. Commun. Pure Appl. Math. 57, 219–266 (2004)
I. Daubechies, B. Han, A. Ron, Z. Shen, Framelets: MRA-based constructions of wavelet frames. Appl. Comput. Harmon. Anal. 14, 1–46 (2003)
M.N. Do, M. Vetterli, The finite ridgelet transform for image representation. IEEE Trans. Image Process. 12, 16–28 (2003)
S. Mallat, G. Peyré, A review of bandlet methods for geometrical image representation. Numer. Algor. 44, 205–234 (2007)
J.S. Bredfeldt et al., Computational segmentation of collagen fibers from second-harmonic generation images of breast cancer. J. Biomed Optics (2014). https://doi.org/10.1117/1.JBO.19.1.016007
B. Goyal, A. Dogra, S. Agrawal, B.S. Sohi, A. Sharma, Image denoising review: From classical to state-of-the-art approaches. Inf Fusion 55, 220–244 (2020)
Y. Liu, S.P. Liu, Z.F. Wang, A general framework for image fusion based on multi-scale transform and sparse representation. Inf Fusion 24, 147–164 (2015)
J. Cai et al., Framelet based Blind Motion De-blurring from a Single Image. IEEE Trans. Image Process. 21, 562–572 (2012)
A. Aldroubi et al., in Wavelet Applications in Signal and Image Processing VIII (2000)
M.A. Unser, I.W. Selesnick, A. Aldroubi, K.Y. Li, A.F. Laine, Wavelets Appl. Signal Image Proc. X (2003). https://doi.org/10.1117/12.504896
J. Nixon-Abell et al., Increased spatiotemporal resolution reveals highly dynamic dense tubular matrices in the peripheral ER. Science (2016). https://doi.org/10.1126/science.aaf3928
M.J. Phillips, G.K. Voeltz, Structure and function of ER membrane contact sites with other organelles. Nat. Rev. Mol. Cell Biol. 17, 69–82 (2015)
I. Arganda-Carreras et al., Trainable Weka segmentation: a machine learning tool for microscopy pixel classification. Bioinformatics 33, 2424–2426 (2017)
A. Descloux, K.S. Grußmayer, A. Radenovic, Parameter-free image resolution estimation based on decorrelation analysis. Nat. Methods 16, 918–924 (2019)
S. Culley et al., Quantitative mapping and minimization of super-resolution optical imaging artifacts. Nat. Methods 15, 263–266 (2018)
C. Fang et al., Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy. Nat. Commun. (2021). https://doi.org/10.1038/s41467-020-20329-3
B. Chen et al., Resolution doubling in light-sheet microscopy via oblique plane structured illumination. Nat. Methods 19, 1419–1426 (2022)
C.M. Hobson et al., Practical considerations for quantitative light sheet fluorescence microscopy. Nat. Methods 19, 1538–1549 (2022)
W.A. Prinz, A. Toulmay, T. Balla, The functional universe of membrane contact sites. Nat. Rev. Mol. Cell Biol. 21, 7–24 (2020)
J. Alvelid, M. Damenti, C. Sgattoni, I. Testa, Event-triggered STED imaging. Nat. Methods 19, 1268–1275 (2022)
D. Mahecic et al., Event-driven acquisition for content-enriched microscopy. Nat. Methods 19, 1262–1267 (2022)
P.H.C. Eilers, C. Ruckebusch, Fast and simple super-resolution with single images. Sci. Rep. 12, 11241 (2022)
S. Hugelier et al., Sparse deconvolution of high-density super-resolution images. Sci. Rep. 6, 21413 (2016)
J. Min et al., FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data. Sci. Rep. 4, 4577 (2014)
Y. Tian et al., Superresolution characterization of core centriole architecture. J. Cell Biol. (2021). https://doi.org/10.1083/jcb.202005103
W. Ren et al., Visualization of cristae and mtDNA interactions via STED nanoscopy using a low saturation power probe. Light Sci. Appl. 13, 116 (2024). https://doi.org/10.1038/s41377-024-01463-9
Acknowledgements
This work was supported by the National Key R&D Program of China (2022YFC3401100), and the National Natural Science Foundation of China (62025501, 31971376, 92150301, 62335008). We thank National Center for Protein Sciences at Peking University in Beijing, China, for assistance with LiveSR, SoRa, and Abberior STED-Facility super-resolution imaging. We thank Abberior China for providing Huygens software. We thank Prof. Jingyan Fu at China Agricultural University for providing the centrosome sample.
Author information
Contributions
Y. Hou and P. Xi conceived the idea of multi-resolution analysis-based fluorescence deconvolution algorithm. P. Xi and M. Li supervised the research. Y. Hou developed the mathematical framework, wrote the code and composed all the figures and videos. M. Li and Y. Hou devised the biological experiments. W. Wang, Y. Fu and X. Ge conducted the biological experiments. Y. Hou, P. Xi and M. Li wrote the manuscript with inputs from all the authors. All authors discussed the results presented in the manuscript.
Ethics declarations
Competing interests
P. Xi and Y. Hou are inventors on a filed patent application related to this work (ZL202211421992.6). The other authors declare no competing interests.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Hou, Y., Wang, W., Fu, Y. et al. Multi-resolution analysis enables fidelity-ensured deconvolution for fluorescence microscopy. eLight 4, 14 (2024). https://doi.org/10.1186/s43593-024-00073-7