Daniel Knop

Microscope objectives – what is the numerical aperture?

What is the n.A. value of a microscope objective? It is the numerical aperture, of course; the value is engraved on the barrel of the objective. But what exactly is it? It's important to know, because it determines the detail reproduction of our objectives. And basically it's quite simple ...


Numerous microscope objectives lie on a white background, the n.A. value printed on each is circled in red in the photo
Every microscope objective has a value for its numerical aperture – what is it?

The numerical aperture of a microscope objective determines how small a structure can be and still be resolved by the lenses. And this is related to the diffraction of light.


First, a bit of physics. A caveat up front: some details are presented here in a very simplified way to make the basics easier to understand, because that is exactly what I am aiming for. In reality, of course, everything is much, much more complicated – just like in real life.


Diffraction of light – what is that? What follows is a greatly simplified analogy, but it makes the physical process behind it understandable. Imagine you are standing in front of a wooden wall in the garden, holding a garden hose in your hand. You see a knothole in the wood of the wall and, just for fun, you aim the hose, from which a powerful, narrow jet of water emerges, at the knothole. The jet is thinner than the knothole, so it passes through unhindered. Anyone standing behind the knothole therefore gets a thin jet of water in the face. Let's call this experiment A.

A diagram schematically shows a garden hose spraying a jet of water through a hole in a wooden wall. The jet passes through unhindered.
Experiment A: The water jet passes through the opening without touching the edge

Now you see another, slightly smaller hole to the left of it and you aim the water jet at it – experiment B. The jet also passes through, but touches the edge of the hole all around. If you stand behind the knothole, instead of a thin stream you get a very wide one, the outer part of which fans out in a funnel shape – so you get wet from top to bottom.


What has happened? The outer layer of the water jet in experiment B touched the wood on all sides, and these water molecules were slowed down and deflected, changing their direction of movement. This created the funnel shape of the water jet.

A diagram schematically shows a garden hose spraying a jet of water through a hole in a wooden wall. The jet is scattered as it passes through and loses its narrow jet shape by widening into a funnel shape.
Experiment B: The water jet touches the edge of the hole all around and is thus scattered

Now imagine that the wooden wall is a camera aperture, the knothole is the aperture opening and the water jet is a beam of light. And replace the person behind the wooden wall with a camera sensor. Because your light beam touched the edge of the hole all around in experiment B, you have caused light diffraction. The photons that make up the light were deflected outwards at the edge of the aperture, and the result is diffraction blur: all around the periphery of the beam, the light was scattered and deflected at a certain angle.


Of course, every expert will object that, unlike the water molecules, the photons of the light beam are not deflected mechanically by contact with the aperture, but by a wave-optical phenomenon at the aperture's edge; that, however, is irrelevant in this context, so I will neglect it here. The analogy with the water hose can at least provide an understanding of the process itself.


The image that such scattered ("diffracted") light produces on the sensor would not be a small, round, sharply defined spot, as it would have been in experiment A, but a somewhat larger spot with a blurred edge. This is known as a diffraction disk, and at small apertures it forms around every image point, which drastically reduces the overall impression of sharpness in the image.

A diagram schematically shows a microscope objective and a beam of light passing through an aperture onto a camera sensor.
Camera analogy to experiment A: A ray of light passes through the lens and then unhindered through an aperture. On the camera sensor, it produces a sharply defined, round contour.

A diagram schematically shows a microscope objective and a beam of light passing through it and hitting a camera sensor through an aperture. On the camera sensor, it produces a blurred contour with a diffraction disk.
Camera analogy to experiment B: A beam of light passes through the lens and then through a narrow aperture. Light scattering occurs as it passes through.



The smaller the aperture, the larger the diffraction disks around each image point. For photography with microscope objectives, an additional factor comes into play: unfortunately, the smaller the structure depicted, the stronger this effect becomes, because the angle of deflection increases.
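
To put rough numbers on the first half of that statement, here is a minimal Python sketch using the standard Airy-disk approximation for an ordinary camera lens (diameter of the first dark ring ≈ 2.44 × wavelength × f-number); the wavelength and f-numbers are illustrative values of mine, not from the text above:

```python
# Diameter of the Airy diffraction disk on the sensor for a conventional
# camera lens: d ≈ 2.44 * wavelength * f-number (first dark ring).
WAVELENGTH_UM = 0.55  # green light, mid-visible spectrum

for f_number in (2.8, 8, 16, 22):
    d_um = 2.44 * WAVELENGTH_UM * f_number
    print(f"f/{f_number}: diffraction disk ~ {d_um:.1f} µm")
```

Stopping down from f/2.8 to f/22 grows the disk from roughly 4 µm to almost 30 µm – larger than the pixels of most current sensors, which is why heavily stopped-down images look soft.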


In order to image the structure in question anyway, we have to capture as much of this deflected light as possible, and the larger the deflection angle, the more difficult this becomes, because the deflected light is weaker. Our lens then needs a correspondingly larger aperture angle in order to capture more of this deflected light. And this brings us to the actual topic, the numerical aperture.


What is an aperture angle?

"The numerical aperture of a lens is the sinus of half the aperture angle" - according to a dictionary. That sounds complicated, so let's simplify it a little. Physics defines it something like this: The aperture angle of a lens is the angle formed by a point on the optical axis (our focused object) with the diameter of the entrance pupil.

A diagram schematically represents a lens and shows the half aperture angle of 45 degrees with the optical axis drawn in
Objective 1 (schematic, simplified): small lenses and small working distance: half aperture angle 45°, numerical aperture 0.7; structures of 0.5 µm in size can be visualized, but the working distance is very small (1 half aperture angle, 2 focused point, 3 optical axis)

In the diagram we see a lens in side view, and in front of it a red point on which we have focused. It lies in the center, i.e. on the optical axis. The point is sharply focused at a certain distance from the front lens, and the aperture angle can be determined for this constellation. The decisive factor is always half the aperture angle, which is the angle between the optical axis and an imaginary line from the focused point to the edge of the lens, as shown in the diagram. Let us assume an angle of 45 degrees for this lens. The sine of 45° is 0.707, so our lens has a numerical aperture of roughly 0.7. The smallest particles it can still image measure about 0.5 micrometers (µm). But the working distance of this lens is very small.


Greater working distance

What happens when we increase the working distance? In focus stacking, we need a relatively large distance between the object and the front lens for good light guidance, whereas this is not usually necessary in general microscopy, e.g. in medicine. The vast majority of laboratory objectives for biological or medical examinations have a very small working distance, often only fractions of a millimeter. In focus stacking, however, things only become interesting at around 10 mm, and some lenses have a comfortable working distance of 35 mm.

A diagram schematically represents a lens and shows the half aperture angle of 25 degrees with the optical axis drawn in
Objective 2 (schematic, simplified): small lenses and large working distance, half aperture angle only 25°, numerical aperture 0.42; only structures of 0.7 µm and larger can be imaged (1 half aperture angle, 2 focused point, 3 optical axis)

Let's give our imaginary lens a working distance of two centimeters instead of the previous few millimeters. What has changed now? The half aperture angle has become dramatically smaller and is now only 25 degrees (all values are purely fictitious, but they illustrate the physical relationships). The sine of 25° is 0.422. Our lens now has a numerical aperture of 0.42, with the result that the reproduction of the finest structures has become significantly worse. An object must now be at least 0.7 µm in size in order to be imaged by the lens.
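
For anyone who wants to check the arithmetic, here is a small Python sketch. It uses the Rayleigh criterion (d = 0.61 × wavelength / NA) as one common convention for the smallest resolvable feature, so its results land in the same ballpark as the round figures above rather than matching them exactly:

```python
import math

# NA = sin(half aperture angle) in air; smallest resolvable feature via
# the Rayleigh criterion d = 0.61 * wavelength / NA (one common convention).
WAVELENGTH_UM = 0.55  # green light

for name, half_angle_deg in (("objective 1", 45.0), ("objective 2", 25.0)):
    na = math.sin(math.radians(half_angle_deg))
    d_um = 0.61 * WAVELENGTH_UM / na
    print(f"{name}: half angle {half_angle_deg:.0f}° -> NA {na:.2f}, "
          f"smallest feature ~{d_um:.2f} µm")
```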


In order to keep the aperture angle and thus the numerical aperture (and the detail reproduction) at the high level of objective 1, we would have to increase the diameter of the lenses along with the working distance. So let's just give it a try.

A diagram schematically depicts a lens and shows half the aperture angle of 45 degrees with the optical axis drawn in, but reddish coloration can be seen in the edge area of the larger front lens here
Objective 3 (schematic, simplified): large lenses and large working distance, half aperture angle 45°, numerical aperture 0.7; structures from 0.5 µm in size can be imaged, but imaging errors can be greater (1 half aperture angle, 2 focused point, 3 optical axis)

In addition to the large working distance, we have now also given our objective number 3 larger lenses. The half aperture angle is now 45° again, the sine value is 0.7, and with a numerical aperture of 0.7 our detail resolution is good again; we can once more recognize particles with a size of 0.5 micrometers (µm).


Chromatic aberrations

We now have an objective with a large working distance and a large aperture angle, i.e. a high numerical aperture and therefore good resolution. However, a new problem now arises: the aberrations of a lens always increase radially from the center towards the edge. In other words: whatever aberrations the lenses produce, e.g. chromatic aberrations (longitudinal or transverse, producing color fringes), distortion or loss of sharpness, will be small to unnoticeable in the center and strong at the edge of the lens. The larger the lens diameter, the greater the inherent aberrations in the peripheral area can be. In the diagram of objective 3, this is symbolized by an increasing red coloration.


One possible solution to the problem would be to use a particularly small camera sensor. For comparison, the following diagram shows two sensors of different sizes, making it clear that the smaller one does not record the problematic image areas. This is exactly how it is usually done in laboratory microscopy, e.g. in biology or medicine. When looking into the eyepieces, we see a circular image. What happens further out in the periphery, in the non-visible part of the image formed by the objective, is of no interest to anyone here, and for laboratory microscope cameras the objective manufacturers often approve only very small sensors (e.g. Nikon 20x or 50x CFI TU Plan Epi ELWD: 2/3-inch sensor). Such a sensor is downright tiny; its corners barely reach the edge of the small, round eyepiece image and remain far inside the image circle. It will therefore not record chromatic aberrations in the non-visible edge area.

Graphic illustration: A beam of light leaves a microscope objective whose lenses cause chromatic aberrations in the peripheral area, shown here in red. However, the APS camera sensor is so small that it does not image the errors.
The larger the diameter of a lens in the microscope objective, the greater the risk of aberrations becoming visible in the peripheral area. A small camera sensor such as APS or MFT images a smaller part of the entire image circle than a 135 mm full frame sensor, so that aberrations in the peripheral area may not be imaged (simplified illustration).

Graphic illustration: A beam of light leaves a microscope objective whose lenses cause aberrations at the edges, shown here in red. The 135 mm full format camera sensor is large enough to image the errors.
The larger the camera sensor, all other factors remaining the same, the more pronounced the marginal aberrations that appear in the finished photo. Larger sensors therefore require lenses with better correction of aberrations (simplified illustration).
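
For orientation, here is a minimal sketch comparing nominal sensor diagonals with a hypothetical image circle; the 30 mm figure for the well-corrected circle is an assumption of mine for illustration, not a value from any manufacturer:

```python
import math

# Nominal sensor dimensions in mm, compared with a hypothetical
# well-corrected image circle; a sensor whose diagonal exceeds that circle
# will record the peripheral aberrations.
WELL_CORRECTED_CIRCLE_MM = 30.0  # assumed value, for illustration only

sensors = {
    "2/3-inch": (8.8, 6.6),
    "MFT": (17.3, 13.0),
    "APS-C": (23.5, 15.6),
    "135 full frame": (36.0, 24.0),
}

for name, (width, height) in sensors.items():
    diagonal = math.hypot(width, height)
    verdict = "stays inside" if diagonal <= WELL_CORRECTED_CIRCLE_MM else "exceeds it"
    print(f"{name}: diagonal {diagonal:.1f} mm -> {verdict}")
```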

Problems with full-frame sensors

It is true that such a lens could have an image circle large enough to cover a 135 mm full frame sensor. However, we would probably notice severe limitations across the entire peripheral area, e.g. in terms of sharpness or chromatic correction, so we would see blurring and probably also color fringes. With a significantly smaller sensor, e.g. APS, things could look considerably better. The extent to which the smaller aberrations at the edge of the image would still be tolerable would then depend on personal expectations.


A little trick ...

In order to work with a lens like this, which produces color errors, blurring or other imaging errors in the peripheral area but delivers a good image in the central area, I have come up with a little trick that can be used with some infinity-corrected microscope optics. You reduce the extension (the distance between the light exit lens of the objective and the camera sensor) slightly in order to reduce the image scale. With a 20x objective, for example, this can result in a scale of 17x or less. Many infinity objectives that are used with a tube lens are very flexible here (more on this in a separate article at some point, because it has interesting potential).


When you work in this way, you include a part of the subject's surroundings that you do not actually want in the image, and it is precisely these unwanted structures that should then lie in the problematic edge zone. The finished image is then upscaled accordingly so that the unwanted outer area can be cropped away. In this way you have removed the expected imaging errors, but have still captured your desired motif in full. You lose a little fine detail, because the upscaling enlarges the image by a few percent, but ultimately you have captured exactly the area you wanted, ideally without the aberrations.
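
In numbers, using the 20x-to-17x example from above (a minimal sketch; where exactly the aberrated zone begins, and thus the real crop, depends on the individual objective):

```python
# The extension trick in numbers: shoot at reduced magnification, crop away
# the aberrated edge zone, then upscale back to the intended framing.
NOMINAL_MAG = 20.0  # the objective's nominal magnification
REDUCED_MAG = 17.0  # magnification after shortening the extension

crop_fraction = REDUCED_MAG / NOMINAL_MAG  # linear share of the frame kept
upscale = NOMINAL_MAG / REDUCED_MAG        # enlargement applied afterwards

print(f"keep the central {crop_fraction:.0%} of the frame (linearly)")
print(f"then upscale by {upscale:.2f}x (~{upscale - 1:.0%} enlargement)")
```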

Graphical drawing: A beam of light leaves a microscope objective whose lenses produce aberrations at the edges, shown here in red.
Infinity microscope lens that produces peripheral aberrations on the 135 mm full frame sensor (tube lens not shown here, graphical representation schematically simplified). This results in chromatic aberrations at the edge of the image.

Graphic drawing, but with a smaller extension, greater shooting distance and smaller image scale, so that the defective edge zone in the image can be cropped.
In this case, the image is taken at a smaller magnification and the object is reproduced at a smaller scale due to the smaller extension and greater shooting distance, so that the motif does not extend into the defective edge zone, which is then cropped away.

Another solution to the problem would be to improve the correction of the lenses in order to reduce these imaging errors. An apochromatic correction, for example, would be one way of avoiding the color errors. But of course this costs money, because it requires special, very expensive lenses. In return, the objective could then also be used with a 135 mm full frame sensor.


Here it becomes clear that practically no optical property of a lens can be changed without affecting other properties. In principle, this is similar to designing a Formula 1 racing car: it is relatively easy to design a fast racing car, as Adrian Newey has observed. It is also quite easy to design a reliable F1 car. But each of these two characteristics comes at the expense of the other. Make the car faster and you pay for it by losing reliability; make it more reliable and you pay for it by losing speed. And to build a fast F1 car that is also extremely reliable, you need a truly brilliant designer.


It is the same in lens design. Whatever you change influences other properties. And what we need for focus stacking – high detail reproduction thanks to a high n.A. value, a large working distance and, at the same time, the outstanding absence of any aberrations – is a combination of properties that virtually exclude one another. Such an objective can only be constructed with extreme effort.


Metallurgical microscope objectives

Coincidentally, metallurgical microscopy places very similar demands on an objective as focus stacking does: it is usually not possible to work with such extremely small working distances as in the medical laboratory, and the objectives are used in incident light, in contrast to the transmitted-light use of most laboratory microscopes. This is why metallurgical microscope objectives such as the Mitutoyo M Plan Apo or some HLB Planapos, with their huge front lenses and enormous working distance, are so suitable for our focus stacking. Their huge lenses, however, absolutely require excellent apochromatic correction, because otherwise color aberrations occur in the peripheral area. The latter is particularly important if we want to work with a large sensor, because we then also use the outer lens areas, which can produce an increased level of aberrations.

Four metallurgical microscope objectives standing next to each other on a white background
Many metallurgical microscope objectives achieve exactly what we need in focus stacking to a high degree, far more than most objectives specially designed for laboratory microscopes (medicine, biology, etc.). However, they require large lens diameters in order to have a high numerical aperture despite a large working distance and consequently also good correction of imaging aberrations.

Conclusion

If detail resolution and thus the numerical aperture were of secondary importance, an objective for focus stacking could be produced for relatively little money with small lenses, and the working distance could still be large. Such objectives do exist (see diagram, objective 2), but they cannot image very fine details because they have a small numerical aperture and consequently a low resolution. Focus stacking is not much fun with them.


If we didn't care about aberrations in the peripheral area, then an objective with large lenses and a correspondingly high numerical aperture, good detail resolution and a long working distance would be possible. Such objectives also exist (see diagram, objective 3), but they are only useful to us with a small camera sensor.


If, on the other hand, a large working distance were unnecessary, our objective could have a large aperture angle even with small lenses, i.e. a high n.A. value and good detail resolution. And because of the small lens diameters, we would probably have hardly any color aberrations even without expensive apochromatic correction, at least in theory. Such achromats are available in abundance (see diagram, objective 1), and for little money too. But with those, we have extreme difficulties with light guidance.

The front of a metallurgical microscope objective can be seen in close-up; the inscription indicating the numerical aperture can be seen above the front lens
Sufficiently large lenses are required for large camera sensors and a large working distance at the same time, because only in this way can a large numerical aperture be achieved, and this requires good correction of possible imaging aberrations

So what exactly is the numerical aperture? To put it simply, it is the design-determined light entry aperture. Similar to the variable iris diaphragm of a normal camera lens, the passage of light through a microscope objective is limited by a circular opening. This is why the term "aperture" is used for both. In principle, the numerical aperture of a microscope objective is therefore a number that tells us a physical quantity, namely how much light can enter the objective.


The ideal would be a complete, unlimited entry of light into the front lens, i.e. a lens that allows all the light to enter. It would deliver an image of fantastic fineness, because there would be no diffraction disks around the individual image points. Such a lens would need an aperture angle of 180 degrees, and in air it would have an n.A. value of 1, because the numerical aperture is the sine of half the aperture angle (90°, sine = 1) multiplied by the refractive index of the surrounding medium – and the refractive index of air is 1, or, if you want to be precise, 1.00028.


However, such a lens is technically impossible to realize. This is why the n.A. value of microscope objectives that work in air is always below 1. And the only way to get above 1 is to replace the air with a liquid medium that itself has a higher refractive index: water, for example, has a refractive index of 1.33, glycerine 1.47 and microscope immersion oil 1.51.
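
As a closing sketch, the theoretical NA ceiling for each immersion medium, using the refractive indices just mentioned; the 72° half aperture angle is a fictitious comparison value of mine, not a specification of any real objective:

```python
import math

# NA = n * sin(alpha) can at most approach n (half angle -> 90°), so the
# refractive index of the medium is the hard ceiling for the numerical aperture.
media = {
    "air": 1.00028,
    "water": 1.33,
    "glycerine": 1.47,
    "immersion oil": 1.51,
}
HALF_ANGLE_DEG = 72.0  # fictitious comparison value

for medium, n in media.items():
    na = n * math.sin(math.radians(HALF_ANGLE_DEG))
    print(f"{medium}: n = {n} -> NA ceiling {n:.2f}, NA at 72° ~{na:.2f}")
```

This is also why real oil immersion objectives reach n.A. values of around 1.4, while dry objectives always remain below 1.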

