## Superresolution imaging in sharper focus

An optical microscope cannot distinguish objects separated by less than half the wavelength of light. Superresolution techniques have broken this “diffraction limit” and provided exciting new insights into cell biology. Still, such techniques hit a limit at a resolution of about 10 nm. Balzarotti *et al.* describe another way of localizing single molecules called MINFLUX (see the Perspective by Xiao and Ha). As in photoactivated localization microscopy and stochastic optical reconstruction microscopy, fluorophores are stochastically switched on and off, but the emitter is located using an excitation beam that is doughnut-shaped, as in stimulated emission depletion. Finding the point where emission is minimal reduces the number of photons needed to localize an emitter. MINFLUX attained ∼1-nanometer precision, and, in single-particle tracking, achieved a 100-fold enhancement in temporal resolution.

## Abstract

We introduce MINFLUX, a concept for localizing photon emitters in space. By probing the emitter with a local intensity minimum of excitation light, MINFLUX minimizes the fluorescence photons needed for high localization precision. In our experiments, 22 times fewer fluorescence photons are required as compared to popular centroid localization. In superresolution microscopy, MINFLUX attained ~1-nm precision, resolving molecules only 6 nanometers apart. MINFLUX tracking of single fluorescent proteins increased the temporal resolution and the number of localizations per trace by a factor of 100, as demonstrated with diffusing 30*S* ribosomal subunits in living *Escherichia coli*. As conceptual limits have not been reached, we expect this localization modality to break new ground for observing the dynamics, distribution, and structure of macromolecules in living cells and beyond.

Superresolution fluorescence microscopy or nanoscopy methods, such as those called stimulated emission depletion (STED) (*1*, *2*) and photoactivated localization microscopy (PALM) or stochastic optical reconstruction microscopy (STORM) (*3*–*5*), are influencing modern biology because they can discern fluorescent molecules or features that are closer together than half the wavelength of light. Despite their different acronyms, all these methods ultimately distinguish densely packed features or molecules in the same way: Only one of them is allowed to fluoresce, whereas its neighbors have to remain silent (*6*). Although this sequential on-and-off switching of molecular fluorescence is highly effective at making neighboring molecules discernible, it does not provide their location in space, which is the second requirement for obtaining a superresolution image. In this regard, these methods strongly depart from each other, broadly falling into two categories.

In the so-called coordinate-targeted versions (*6*), which most prominently include STED microscopy, the position of the emitting molecules is established by illuminating the sample with a pattern of light featuring points of ideally zero intensity, such as a doughnut-shaped spot or a standing wave. The intensity and the wavelength of the pattern are adjusted such that molecular fluorescence is switched off (or on) everywhere—except at the minima where this process cannot happen. As it is “injected” by the incoming pattern, the emitter position is always known through the device controlling the position of the minima. In contrast, the coordinate-stochastic superresolution modality PALM/STORM switches on (and off) the molecules individually and randomly in space, and this implies that the molecular position is established subsequently, by using emitted rather than injected photons. The emitter position is estimated from the centroid of the fluorescence diffraction pattern produced by the emitter on a camera (*7*). This process, called “localization,” can reach a precision given by the standard deviation of the diffraction fluorescence pattern (σ_{PSF} ≈ 100 nm) divided by √*N*, with *N* being the number of detected photons (*8*–*11*). Although *N* = 400 should yield precisions of σ ≈ 5 nm, obtaining these limits is commonly challenged by other factors, such as the typically unknown orientation of the fluorophore emission dipole (*12*, *13*).

Camera-based localization is also the method of choice for tracking individual molecules (*14*–*16*). Here, the sum of molecular emissions determines the track length, whereas the emission rate determines the spatiotemporal resolution. Unfortunately, large emission rates reduce the track length by exacerbating bleaching. Alternatively, the molecule can be localized with scanning confocal arrangements (*17*), but this also needs large photon numbers (*N*). Therefore, improving localization has so far concentrated on increasing the molecular emission budget, particularly through antibleaching agents (*18*), special fluorophores (*19*), cryogenic conditions (*20*), transient (fluorogenic) labels (*21*, *22*), and fluorophore-metal interactions (*23*). However, all these remedies entail major restrictions. Moreover, none of them have addressed the problem of maximizing the localization precision with the limited emission budget.

Here, we introduce MINFLUX, a concept for establishing the coordinates of a molecule with (minimal) emission fluxes, originating from a local excitation minimum. Compared with centroid-based localization, MINFLUX attains nanoscale precision with a much smaller number of detected photons, *N*, and records molecular trajectories with >100 times as high temporal resolution (*24*). Moreover, our concept is surprisingly simple and can be realized in both scanning beam and standing-wave microscopy arrangements.

## Basic concept

In a background-free STED fluorescence microscope with true molecular (1-nm) resolution, detecting a single photon from the position of the doughnut zero is enough to identify a molecule at that coordinate (*25*). Detecting more than one photon is redundant. Consider a gedanken experiment in which we seek to establish the trajectory of a molecule diffusing in space. Instead of using uniform wide-field excitation and a camera, we now excite with a reasonably bright focal doughnut that can be moved rapidly throughout the focal plane. If we now managed to target the zero of the doughnut-shaped excitation beam exactly at the molecule, steering it so that it is constantly overlapped with the molecule in space, the doughnut-targeting device would map the molecular trajectory without eliciting a single emission. Note that a single emission (e.g., due to a minimal misplacement) would be enough to let us know that the molecule is not at the location of the doughnut zero. Unfortunately, it is impossible for us to place the doughnut zero right at the molecular coordinate in a single shot, which is why perfect localization without emissions can be performed only by a supernatural being, a demon who knows the position of the molecule in advance. Yet, this gedanken experiment suggests that approaching the molecular position by targeting the zero of the excitation doughnut to the molecule should reduce the number of detected photons required for localization. This is because the position of the doughnut zero is well known, and the resulting fluorescence indicates the residual distance of the molecule to the zero. Hence, apart from confirming the presence of the molecule, the resulting fluorescence carries information about the molecule’s location. 
Actually, the fluorescence can be seen as the price to be paid for not matching the molecular position with the position of the zero, which also implies that the smaller the mismatch is, the fewer fluorescence photons are needed for localization.

Therefore, in our realization of MINFLUX (*26*, *27*), the location of the molecule is probed with a deep intensity minimum and the fluorescence emissions reveal the position of the molecule. Clearly, this strategy entails a favorable fluorescence photon economy: The approximate position is injected by the excitation photons abundantly available from the light source (*25*), whereas the precious emitted photons are used just for fine-tuning.

MINFLUX can be implemented with many types of light patterns, including standing waves which, after localizing in one dimension (1D), can be rotated to localize in other directions, too. Nonetheless, some key characteristics of MINFLUX hold for any type of pattern. To derive them, we now assume an arbitrary 1D intensity pattern *I*(*x*) with *I*(0) = 0. This could be a standing wave (Fig. 1A) of wavelength λ, but we explicitly make no restrictions as to the pattern shape. Let us first probe the location *x*_{m} of a molecule, ignoring photon statistics. If the pattern is moved, such that the zero sweeps over the probing range –*L*/2 < *x* < *L*/2 containing the molecule, the molecular fluorescence *f*(*x*) = *CI*(*x* – *x*_{m}) vanishes at *x*_{m}. *C* is a prefactor that is proportional to the molecular brightness and the detection sensitivity, as well as to a parameter describing the molecular orientation in space. The solution *x*_{m} is now easily obtained by solving *f*(*x*) = 0.

Because *C* is a prefactor, the molecular orientation has no influence on the solution. This contrasts with camera-based localization, where unidentified molecular orientations can induce systematic errors in the tens of nanometers range (*12*, *13*). Moreover, because *I*(*x*) is known or can be determined experimentally, two probing measurements with the zeros of *I*(*x*) placed around the molecule are sufficient for establishing *x*_{m} (Fig. 1B). Clearly, this also holds for the two “end points” of the *L*-sized probing range, where the signal is given by *f*_{0} = *CI*(*x*_{m} + *L*/2) = *CI*_{0}(*x*_{m}) and *f*_{1} = *CI*(*x*_{m} – *L*/2) = *CI*_{1}(*x*_{m}); note that we have redefined the two displaced intensity functions with the subscripts 0 and 1. If *L* is so small that *f*(*x*) can be approximated quadratically around *x*_{m}, any dependence on λ disappears. *f*(*x*) = *C*(*x* – *x*_{m})^{2} = 0 then yields the solution (see supplementary text S3). Thus, for small distances between the zero and the molecular position (*L* << λ/π), the solution *x*_{m} does not depend on the wavelength creating the light pattern, nor does *x*_{m} depend on the fluorescence emission wavelength, because the emitted photons are just collected. Therefore, in the quadratic approximation, the solution of the molecular position *x*_{m} does not depend on any wavelength.

In practice, *f*_{0} and *f*_{1} are the averages of the acquired photon counts *n*_{0} and *n*_{1} obeying Poissonian statistics, which needs to be considered. Hence, *x*_{m} is actually the expected value of the localization with the individual measurements fluctuating around this value. The conditional probability distribution of photons *P*(*n*_{0}, *n*_{1}|*N*) follows a binomial distribution *P*(*n*_{0}, *n*_{1}|*N*) ≈ Binomial(*p*_{0}, *N*), where *p*_{0} is the probability of assigning a photon to the first probing measurement *I*_{0}. This success probability is given by *p*_{0}(*x*) = *f*_{0}(*x*)/[*f*_{0}(*x*) + *f*_{1}(*x*)] = *I*_{0}(*x*)/[*I*_{0}(*x*) + *I*_{1}(*x*)], considering the dependence on both *I*(*x*) and *L*. We calculated *p*_{0}(*x*) for three distances *L* = 50, 100, and 150 nm of a standing wave of λ = 640 nm, showing that, between *x* = –*L*/2 and *x* = *L*/2, it steeply spans the whole range between zero and unity (Fig. 1C). With decreasing *L*, the steepness increases, and in the quadratic approximation, we have *p*_{0}(*x*) = (*x* + *L*/2)^{2}/[(*x* + *L*/2)^{2} + (*x* – *L*/2)^{2}].

The position of the emitter *x*_{m} can be estimated by using a maximum likelihood approach. The maximum likelihood estimator (MLE) x̂_{m} of *x*_{m} is such that *p*_{0}(x̂_{m}) = p̂_{0} = *n*_{0}/*N*, where p̂_{0} is the MLE of the success probability *p*_{0}(*x*_{m}). Thus, *p*_{0}(*x*) maps the statistics of *n*_{0} and *n*_{1} into the position estimation, giving the distribution of the position estimator x̂_{m}. The smaller *L* is, the more sharply distributed x̂_{m} is (Fig. 1C). Statistical modeling of MINFLUX allows us to calculate the Fisher information of the emitter position and its Cramér-Rao bound (CRB) (see supplementary text S1), which determines the best localization precision attainable with any unbiased estimator (Fig. 1D). For the quadratic approximation, the CRB is given by σ_{CRB}(*x*) = (*L*/4√*N*)[(2*x*/*L*)^{2} + 1] (see Eq. S22b). Unlike camera-based localization, in which the precision is homogeneous throughout the field of view, here, it reaches a minimal value σ_{CRB}(0) = *L*/4√*N* (Fig. 1D) at the center of the probing range. Note that, for example, two measurements with the zero targeted to coordinates within a distance *L* = 50 nm localize a molecule with ≤2.5 nm precision using merely 100 detected photons.
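
These statistics can be checked with a short Monte Carlo sketch. Assuming the quadratic approximation and illustrative parameters (*L* = 50 nm, *N* = 100, molecule at the center of the probing range), the empirical spread of the inverted estimator should match σ_{CRB}(0) = *L*/4√*N* = 1.25 nm:

```python
import math, random

# Monte Carlo check of the 1D photon statistics (quadratic approximation,
# illustrative parameters): binomial counts are inverted through p0(x), and
# the spread of the estimator is compared with the Cramér-Rao bound.
random.seed(7)

L, N, x_true = 50.0, 100, 0.0   # range (nm), detected photons, true position

def p0(x):
    """Success probability of assigning a photon to I0."""
    a, b = (x + L / 2) ** 2, (x - L / 2) ** 2
    return a / (a + b)

def mle(n0):
    """Invert p0_hat = n0/N for the maximum likelihood position estimate."""
    p = min(max(n0 / N, 1e-12), 1 - 1e-12)   # guard the endpoints
    r = math.sqrt(p / (1 - p))
    return (L / 2) * (r - 1) / (r + 1)

p = p0(x_true)
estimates = []
for _ in range(4000):
    n0 = sum(random.random() < p for _ in range(N))   # Binomial(p0, N) draw
    estimates.append(mle(n0))

mean = sum(estimates) / len(estimates)
std = math.sqrt(sum((e - mean) ** 2 for e in estimates) / len(estimates))
crb = (L / (4 * math.sqrt(N))) * ((2 * x_true / L) ** 2 + 1)   # 1.25 nm here
print(std, crb)   # the empirical spread matches the bound closely
```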

Analytical expressions of *p*_{0}(*x*) and σ_{CRB}(*x*) are equally well derived for doughnut beams and other types of patterns, as well as extended in 2D (fig. S1 and supplementary text S2). In fact, a doughnut excitation beam displays similar mathematical behavior around its minimum as a standing wave but provides 2D information. Moreover, it can be combined with confocal detection for background suppression. Hence, we decided to explore the MINFLUX concept in a scanning confocal arrangement featuring a doughnut-shaped excitation beam, similarly to our gedanken experiment (Fig. 2A). Moving the doughnut across a large sample area (approximately 20 by 20 μm^{2}) was realized by piezoelectric beam deflection, whereas fine positioning was performed electro-optically (see fig. S13 and Materials and Methods). The latter allowed us to set the doughnut zero within <5 μs with <<1-nm precision to arbitrary coordinates, concomitantly defining the distance *L* (Fig. 2B).

Two-dimensional MINFLUX localization requires at least three positions of the doughnut zero, preferably arranged as an equilateral triangle (Fig. 2B). Considerations and simulations show that adding a fourth doughnut position right at the triangle center helps remove ambiguities in the position estimation of the molecule (see fig. S2 and supplementary text S2). Thus, a set of four emitted photon counts *n*_{0}, *n*_{1}, *n*_{2}, and *n*_{3} corresponding to the four positions of the doughnut yields the molecular location (*x*_{m}, *y*_{m}) within an approximate range of diameter *L*, referred to as the field of view (Fig. 2B). As we can move and zoom the field of view quickly, our setup entails three basic modes of operation: (i) fluorescence nanoscopy (Fig. 2C); (ii) short-range tracking of individual emitters that move within the field of view (Fig. 2D); and (iii) long-range tracking and nanoscopy in areas of a few square microns, where the field of view is shifted in space in order to cover the large areas (Fig. 2E).
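
A brute-force sketch of the four-exposure 2D localization is shown below, assuming a quadratic doughnut profile near each zero, multinomial photon statistics, and a grid-search maximum likelihood estimator. This is an illustrative toy, not the hardware estimators used in the experiments:

```python
import math, random

# Illustrative 2D MINFLUX localization: four doughnut-zero positions (an
# equilateral triangle plus its center), quadratic approximation of the
# doughnut around each zero, and a brute-force maximum likelihood search.
random.seed(3)

L = 70.0   # field-of-view diameter (nm)
zeros = [(0.0, 0.0)] + [((L / 2) * math.cos(2 * math.pi * k / 3),
                         (L / 2) * math.sin(2 * math.pi * k / 3))
                        for k in range(3)]

def intensities(x, y):
    """Quadratic doughnut profile around each zero position."""
    return [(x - zx) ** 2 + (y - zy) ** 2 for zx, zy in zeros]

def sample_counts(x, y, N):
    """Distribute N photons multinomially over the four exposures."""
    I = intensities(x, y)
    counts = [0, 0, 0, 0]
    for i in random.choices(range(4), weights=I, k=N):
        counts[i] += 1
    return counts

def mle_grid(counts, step=0.5):
    """Maximize the multinomial log-likelihood on a grid over the field of view."""
    best, best_ll = (0.0, 0.0), -float("inf")
    grid = [-L / 2 + step * i for i in range(int(L / step) + 1)]
    for x in grid:
        for y in grid:
            I = intensities(x, y)
            s = sum(I)
            ll = sum(n * math.log(Ii / s + 1e-300) for n, Ii in zip(counts, I))
            if ll > best_ll:
                best_ll, best = ll, (x, y)
    return best

x_true, y_true = 8.0, -5.0
counts = sample_counts(x_true, y_true, N=1000)
x_hat, y_hat = mle_grid(counts)
print(x_hat, y_hat)   # close to (8, -5)
```

Note how little the central exposure contributes when the molecule sits near the center: that is exactly the low-count signature used to verify centering during tracking.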

The success probability, which maps the statistics of *n*_{0}, *n*_{1}, *n*_{2}, and *n*_{3} into the position estimation, is now a multivariate function, as is the CRB of the estimator (see fig. S3 and supplementary text S2). As in the 1D case, the CRB scales linearly with *L* at the origin, and the dependence on λ vanishes with increasing validity of the quadratic approximation. We used two types of position estimators in our experiments. The MLE is used for imaging, because its precision was found to converge to the CRB for *N* ≥ 100 photons. If *N* < 100, as is the case for quick position estimation in tracking, a modified least mean square estimator (mLMSE) is more suitable and can be implemented directly in the electronics hardware. Because the mLMSE is biased, the recorded trajectories are corrected afterwards by using a numerically unbiased mLMSE (numLMSE) (see fig. S9 and supplementary text S3).

## Localization precision, nanoscopy, and molecular tracking

To investigate the localization precision of MINFLUX, we repeatedly localized a single fluorescent emitter at different positions throughout the field of view. We used an ATTO 647N molecule in the reducing and oxidizing system (ROXS) buffer (*18*) and divided the field of view into an array of 35 × 35 pixels separated by 3 nm in both directions. The excitation intensity and pixel dwell time were chosen to yield a given average number of counts per pixel. A stack of ~6000 arrays allowed us to perform an MLE- and numLMSE-based MINFLUX localization on each pixel by using varying subsets of *N* photons. Repeating this procedure with different *N*-sized subsets and comparing each result with the pixel coordinate provided the localization precision at that pixel as a function of *N* (Fig. 3A and fig. S8). At the center of a field of view in which *L* = 100 nm, 500 photons were sufficient for obtaining 2-nm precision (Fig. 3, A to D). Note that localization precision and localization error can be considered equivalent here, as the bias (accuracy) of the position estimations is negligible. Generally, the precision obtained with MINFLUX is higher than that achievable by a camera (Fig. 3, D and E). The measurements also confirm the inverse–square-root dependence on *N* (Fig. 3E). Throughout the field of view, the precision obtained with MINFLUX agrees very well with the CRB (Fig. 3, D and E), indicating that photon information has indeed been used optimally.
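
The inverse–square-root dependence can be reproduced with a small simulation of a 1D quadratic probing model (a self-contained illustrative toy, not the experimental analysis pipeline):

```python
import math, random

# Scaling check with a 1D quadratic toy model of the probing pattern:
# the localization precision should fall as 1/sqrt(N).
random.seed(11)

L = 100.0   # probing range (nm), matching the field-of-view size in the text

def locate(n0, N):
    """Invert the binomial success probability for the position estimate."""
    p = min(max(n0 / N, 1e-12), 1 - 1e-12)
    r = math.sqrt(p / (1 - p))
    return (L / 2) * (r - 1) / (r + 1)

def precision(N, trials=3000):
    """Empirical localization precision for a molecule at the center (p0 = 0.5)."""
    xs = [locate(sum(random.random() < 0.5 for _ in range(N)), N)
          for _ in range(trials)]
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

s100, s400 = precision(100), precision(400)
print(s100, s400)   # quadrupling N halves the error
```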

To investigate the resolution obtainable with MINFLUX nanoscopy, we set out to discern fluorophores on immobilized, labeled DNA origamis (*28*) featuring distances of 11 nm and 6 nm from each other (Fig. 4). After identifying an origami by wide-field microscopy, we moved it as close as possible to the center of the field of view. As fluorophores, we used Alexa Fluor 647 which, in conjunction with a suitable chemical environment (*29*), λ = 405 nm illumination for on-switching, and λ = 642 nm excitation light, provided the on-off switching rates needed for keeping all but one molecule predominantly nonfluorescent. Imaging was performed by identifying the position of each emitting molecule as it emerged stochastically within the field of view. We used *L* = 70 nm and *L* = 50 nm for the 11-nm and the 6-nm origami, respectively. By applying a hidden Markov model (HMM) (see Materials and Methods) to the fluorescence emission trace, we discriminated the recurrent single-molecule emissions from multiple-molecule events and from the background. Recording *n*_{0}, *n*_{1}, *n*_{2}, and *n*_{3} for each burst and applying MINFLUX on those with *N* ≥ 500 and *N* ≥ 1000 for the 11-nm and the 6-nm origami, respectively, allowed us to assemble a map of localizations yielding nanoscale-resolution images (Fig. 4). The measurement duration was 50 s and ~2 min for the 11-nm and the 6-nm origami, respectively. Although the individual molecules emerged very clearly, we further applied a *k*-means cluster analysis to classify the localization events into nanodomains representing fully discerned molecules at 11-nm and 6-nm distances. MINFLUX clearly resolves the molecules at 6-nm distance with 100% modulation (Fig. 4N), proving that true molecular-scale resolution has been reached at room temperature.
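
The *k*-means classification step can be illustrated on synthetic data: two fluorophores 6 nm apart, each localized with ~1.2 nm precision, as in the 6-nm origami case. The clustering code is a generic textbook sketch, not the analysis actually used for Fig. 4:

```python
import random

# Synthetic localization events from two fluorophores 6 nm apart (centers and
# the 1.2 nm spread are taken from the text; everything else is illustrative).
random.seed(5)

centers = [(-3.0, 0.0), (3.0, 0.0)]
points = [(random.gauss(cx, 1.2), random.gauss(cy, 1.2))
          for cx, cy in centers for _ in range(200)]

def kmeans(points, iters=50):
    """Plain two-cluster k-means, seeded with the two extreme points."""
    means = [min(points), max(points)]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            i = min((0, 1),
                    key=lambda j: (p[0] - means[j][0]) ** 2 + (p[1] - means[j][1]) ** 2)
            groups[i].append(p)
        means = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                 if g else means[j] for j, g in enumerate(groups)]
    return sorted(means)

m0, m1 = kmeans(points)
print(m1[0] - m0[0])   # ~6 nm separation between the recovered nanodomains
```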

We also made a rigorous comparison of MINFLUX nanoscopy with PALM/STORM. For PALM/STORM, we considered a noise-free ideal camera, so as to obtain optimal performance irrespective of camera characteristics, such as dark and gain-dependent noise. To this end, we redistributed the photon counts of each emission event of our MINFLUX images, so that each one comprised *N* = 500 or 1000 counts for the 11-nm and 6-nm origami, respectively. For each nanodomain, the spread (covariance) of the localizations was calculated and displayed as a bivariate Gaussian distribution centered on each nanodomain (Fig. 4, G and L). For PALM/STORM, we also considered *N* = 500 and 1000 photons per measured localization point for the larger and smaller origami, respectively. We then calculated an ideal PALM/STORM image using the CRB of camera-based localization under the conditions that the camera has no readout noise and that the signal-to-background ratio (SBR_{c}) is 500. For the 11-nm origami, we obtained a localization precision of σ = 5.4 nm by PALM/STORM and an average σ of 2.1 nm for MINFLUX (see supplementary text S4). For the 6-nm origami, the corresponding values were σ = 3.8 nm for PALM/STORM and just σ = 1.2 nm for the average MINFLUX precision. Although the CRB-based PALM/STORM images represent ideal recordings, the MINFLUX data may still contain influences by sample drift and other experimental imperfections, implying that further improvements are possible. The comparison actually shows that for the same low number of detected photons, MINFLUX nanoscopy clearly resolves the individual molecules, unlike PALM/STORM.

We note that a substantial fraction of Alexa Fluor 647 molecules can yield more than 500 or 1000 detected photons per emission cycle and, provided that these molecules are fortuitously used, the localization precision in PALM/STORM can be higher, at least in principle. In practice, however, attaining approximately 1- to 2-nm precision has been precluded because collecting high photon numbers entails extended recording times and, hence, sample drift. Moreover, reconstructing superresolution images only from molecules providing large photon numbers implies that poorer emitters are discarded, which compromises image faithfulness. In any case, the fact that MINFLUX requires far fewer detected photons should open the door to using switchable fluorophores that provide fewer fluorescence photons.

Next, we tracked single 30*S* ribosomal subunit proteins fused to the photoconvertible fluorescent protein mEos2 (*30*) in living *Escherichia coli* (Fig. 5A). MINFLUX tracking became possible after ensuring that (i) the switched-on molecules were in the field of view, (ii) the four-doughnut measurement was carried out so fast that it was hardly blurred by motion, and (iii) the molecular position was estimated so quickly that repositioning the field of view kept the molecule largely centered. In addition, the tracking algorithm had to be robust against losing the molecule through blinking (irregular mEos2 on and off intermittencies of 2.2-ms and 0.6-ms average duration, respectively; see fig. S12E and Materials and Methods). These hurdles were overcome by implementing position estimation and decision-making routines in hardware (Fig. 2A) (see Materials and Methods), which, together with our electro-optical and piezoelectric beam-steering devices, provided an ~7-μs response time across a micrometer within an overall observation area of several tens of microns (Fig. 2E). The localization frequency of MINFLUX was set to 8 kHz, and the mLMSE and numLMSE position estimators were used in the live and postrecording stages, respectively.

A collection of 1535 single-molecule tracks was recorded from 27 living *E. coli* cells. Typical measured trajectories (Fig. 5, B to E) show that the central doughnut produces a lower count rate, indicating that the single molecule remains well centered during tracking. The reconstructed trajectories consist of traces of approximately a millisecond in length (Fig. 5C), as the localization procedure is repeatedly interrupted by blinking of the fluorescent probe. The on and off states were identified by applying an HMM to the total number of photons collected per time interval (see Materials and Methods), allowing the valid localizations to be discriminated.

For each trajectory, the apparent diffusion coefficient *D* and the localization precision σ were estimated for sliding windows of 35 ms. Both parameters were obtained from optimal least squares fits (OLSF) of the mean square displacement (MSD) (see supplementary text S5). The time dependence of *D* (Fig. 5D) reveals transient behavioral changes with 35 ms temporal resolution, which is unprecedented for these kinds of fluorescent probes. It is worth noting that each point of this curve uses more than 100 valid localizations, which greatly surpasses the typical trajectory length (see table S2) of classical camera tracking with single fluorescent proteins.
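
The MSD analysis can be sketched as an ordinary least squares fit of the first few MSD lags; in 2D, MSD(*n*Δ*t*) = 4*Dn*Δ*t* + 4σ^{2} when the motion-blur term *R* is neglected. The simulation below uses illustrative parameters (not the experimental values) and a plain line fit rather than the optimal weighting of the OLSF:

```python
import math, random

# 2D Brownian trajectory observed every dt with Gaussian localization noise.
# Illustrative parameters: D in um^2/s, dt in s, sigma in um.
random.seed(2)
D, dt, sigma, steps = 0.5, 125e-6, 0.005, 5000

x = y = 0.0
obs = []
for _ in range(steps):
    x += random.gauss(0, math.sqrt(2 * D * dt))
    y += random.gauss(0, math.sqrt(2 * D * dt))
    obs.append((x + random.gauss(0, sigma), y + random.gauss(0, sigma)))

def msd(lag):
    """Time-averaged mean square displacement at a given lag."""
    d = [(obs[i + lag][0] - obs[i][0]) ** 2 + (obs[i + lag][1] - obs[i][1]) ** 2
         for i in range(len(obs) - lag)]
    return sum(d) / len(d)

# Line fit: MSD(n*dt) = 4*D*(n*dt) + 4*sigma**2 (blur term neglected).
lags = [1, 2, 3, 4]
ts, ms = [n * dt for n in lags], [msd(n) for n in lags]
tm, mm = sum(ts) / 4, sum(ms) / 4
slope = sum((t - tm) * (m - mm) for t, m in zip(ts, ms)) / sum((t - tm) ** 2 for t in ts)
D_hat = slope / 4
sigma_hat = math.sqrt(max(mm - slope * tm, 0.0) / 4)
print(D_hat, sigma_hat)   # close to 0.5 um^2/s and 0.005 um
```

The intercept of the fit yields the localization precision and the slope the apparent diffusion coefficient, which is why a single sliding window delivers both quantities at once.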

Plotting the mean localization precision (σ) against the mean number of photons per localization (*N*) (Fig. 5H) demonstrates that the photon efficiency of MINFLUX tracking is 5 to 10 times that of its camera-based counterpart (even for an ideal detector with typical background levels) (see fig. S6 and supplementary text S4). A mean localization precision of <48 nm was obtained by detecting, on average, just nine photons per localization with a time resolution (Δ*t*) of 125 μs. MINFLUX tracking was primarily limited by the blinking of mEos2, which prevents the molecule from being tightly followed by the center of the beam pattern, where the photon efficiency is highest. A nonblinking probe could be tracked closer to the center, which would allow a smaller pattern size *L* and further reduce the average tracking error.

Any method that tracks a probe with a finite photon budget faces a tradeoff between the number of localizations in a track *S* and the spatial resolution σ. Our MINFLUX tracking experiments were tuned in favor of high numbers of localizations, because this has been shown to be the best strategy for measuring *D* (*31*). This can be appreciated in the contour levels of the relative CRB of *D* (Fig. 5I), shown as a function of the number of localizations *S* and the so-called reduced squared localization precision *X* = σ^{2}/*D*Δ*t* – 2*R* [where *R* is a blurring coefficient (*32*)]. The latter can be thought of as the squared localization precision in units of the diffusion length within the integration time. In this *X*-*S* plane, each measured trajectory is represented as a point in a scatter plot (red), using average values per track. The average trajectory length was 157 ms with 742 valid localizations (an ~100-fold improvement; see table S2), with a photon budget of ~5800 collected photons. Thereby, half of the obtained MINFLUX tracks show relative CRB values below 23% (*S* > 500) (Fig. 5I, inset). MINFLUX tracking can measure apparent diffusion coefficients with precisions <20%, whereas camera-based implementations (Fig. 5I, gray ellipse) center around 70%.

## Discussion and outlook

Among the reasons why MINFLUX excels over centroid-based localization is that, in the latter, the origin of any detected photon has a spatial uncertainty given by the diffraction limit; in MINFLUX, each detected photon is associated with an uncertainty given by the size *L*. Hence, adjusting *L* below the diffraction limit renders the emitted photons more informative. A perfect example is origami imaging (Fig. 4), where adjusting *L* from 70 nm to 50 nm improves the localization precision substantially. However, making *L* smaller must not be confused with exploiting external a priori information about molecular positions; no Bayesian estimation approach is needed. MINFLUX typically starts at the diffraction limit, but as soon as some position information is gained, *L* can be reduced and the uncertainty range “zoomed in.” Reducing *L* makes the detected photons continually more informative. Therefore, we can also regard MINFLUX as an acronym for maximally informative luminescence excitation probing. Although we have not really exploited the zooming-in option here, decreasing *L* repeatedly during the localization procedure will further augment the power of MINFLUX, while also eliminating the anisotropies prevalent at large *L* (Figs. 3C and 4, G and L). Iterative MINFLUX variants bear enormous potential for investigating macromolecules or interacting macromolecular complexes, potentially rivaling current Förster resonance energy transfer (*33*) and camera localization–based approaches to structural biology (*34*).
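
The iterative zoom-in can be sketched with a 1D quadratic model: re-center the probing pattern on each estimate and halve *L*, so that later photons become progressively more informative. This is an illustrative toy, not the published procedure:

```python
import math, random

# Iterative zoom-in sketch: localize, re-center the zeros on the estimate,
# shrink L, and repeat. Quadratic 1D model, illustrative parameters.
random.seed(9)

def localize_once(x_m, center, L, N):
    """One 1D localization with pattern zeros placed at center -/+ L/2."""
    f0 = (x_m - (center - L / 2)) ** 2
    f1 = (x_m - (center + L / 2)) ** 2
    p = f0 / (f0 + f1)
    n0 = sum(random.random() < p for _ in range(N))   # binomial photon split
    q = min(max(n0 / N, 1e-12), 1 - 1e-12)
    r = math.sqrt(q / (1 - q))
    return center + (L / 2) * (r - 1) / (r + 1)

x_true, est, L, N = 12.0, 0.0, 100.0, 200
for _ in range(4):
    est = localize_once(x_true, est, L, N)
    L /= 2   # zoom in: each halving roughly halves the attainable error
print(abs(est - x_true))   # typically sub-nanometer after a few iterations
```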

As in any other concept targeting sample coordinates with intensity minima, the practical limits of MINFLUX will be set by background and aberrations blurring the intensity zero of *I*(*x*). In our experiments, the doughnut minimum amounted to <0.2% of the doughnut crest (fig. S7). Regarding aberration corrections, in camera-based localization one has to correct a faint single molecule emission wavefront containing a few tens or hundreds photons of broad spectral range (100 to 200 nm). In MINFLUX, the corrections are applied to the bright and highly monochromatic (laser) wavefront that produces *I*(*x*); this makes the application of spatial light modulators straightforward. Moreover, the correction has to be optimized for the *L*-defined range only. This brings about the important advantage that, in iterative MINFLUX implementations, it is sufficient to compensate for aberrations in the last (smallest *L*) iteration step, where their effect is minimal.

Spatial wavefront modulators can also be used to target the coordinates with patterns *I*_{i}(*x*) of varying shape and intensity, which is another degree of freedom for engineering the field of view toward uniform localization precision and for adapting the field of view toward the molecular motion. Because we have already achieved molecular-scale resolution with standard fluorophores, the new frontiers of MINFLUX will not be given by the resolution values but by the number of photons needed to attain that (single-digit nanometer) resolution. Conversely, we can expect MINFLUX to enable tracking and nanoscopy of fluorophores that provide many fewer photons, including auto- and other types of luminophores.

A fundamental difference between MINFLUX and STED nanoscopy is that, in the latter, the doughnut pattern simultaneously performs both the localization and the on-off state transition. Creating on-off state disparities between two neighboring points requires intensity differences that are large enough to create the off (or on) state with certainty. Because in MINFLUX nanoscopy the doughnut is used just for localization, such definite (i.e., saturated) transitions are not required.

Given that probing with an intensity maximum and solving for max[*f* = *CI*(*x*)] is equally possible, it is interesting to ask whether the same localization precision can be achieved with an excitation maximum. The answer is no, because at a local emission maximum, small displacements of the emitter will not induce detection changes of similar significance for a small distance *L* (fig. S4).

In MINFLUX nanoscopy and tracking, it will also be possible to accommodate multiple fields of view in parallel by using arrays of minima provided by many doughnuts or standing waves. Further expansions of our work include multicolor, 3D localization [e.g., by using a z-doughnut (*2*)] and discerning emission spectra, polarization, or lifetime. Besides providing isotropic molecular resolution, such expansions should enable observation of inter- and intraprotein dynamics at their characteristic time scales. MINFLUX can also be implemented in setups featuring light sheet illumination (*35*), optical tweezers (*36*), and anti-Brownian electrokinetic trapping (*37*). In fact, MINFLUX should become the method of choice in virtually all experiments that localize single molecules and are limited by photon budgets or slow recording, such as the method called PAINT (*22*). Because it keeps or even relaxes the requirements for sample mounting, our concept should be widely applicable not only in the life sciences but also in other areas where superresolution and molecular tracking bear strong potential.

Finally, it is worth noting that MINFLUX nanoscopy has attained the resolution scale (~1 nm) at which fluorescent molecules start to interact with each other—the ultimate limit attainable with fluorophores. Although fluorescence on-off switching remains the cornerstone for breaking the diffraction barrier, in MINFLUX this breaking is augmented because, for small distances between a molecule and the intensity zero, the emitter localization does not depend on any wavelength. A consequence of this finding is that superresolution microscopy should also be expandable to low-numerical-aperture lenses, wavelengths outside the visible spectrum, and hitherto inapplicable luminophores. More staggering, however, is the implication that focusing by itself is becoming obsolete, meaning that it should be possible to design microscopy modalities with molecular (1-nm) resolution without using a single lens.

## Supplementary Materials

www.sciencemag.org/content/355/6325/606/suppl/DC1

Materials and Methods

Supplementary Text

Figs. S1 to S14

Tables S1 and S2

## References and Notes

**Acknowledgments:** We thank F. Persson for early discussions about implementations of the concept and initial tracking experiments. E. D’Este and S. J. Sahl are acknowledged for critical reading. K.C.G. and F.D.S. thank the Cusanuswerk for a stipend and the Max Planck Society for a partner group grant, respectively. A.H.G. and J.E. acknowledge the European Research Council and the Knut and Alice Wallenberg Foundation for funding. S.W.H. is inventor on patent applications WO 2013/072273 and WO 2015/097000 submitted by the Max Planck Society that cover basic principles and arrangements of MINFLUX. Further patent applications with F.B., Y.E., K.C.G., and S.W.H. as inventors have been submitted by the Max Planck Society, covering selected embodiments and procedures. S.W.H. consults and owns shares of Abberior Instruments GmbH, a manufacturer of superresolution microscopes.