Microscopy is any technique for producing visible images of structures or details too small to otherwise be seen by the human eye, using a microscope or other magnification tool. It is often used more specifically as a technique of using a microscope.
Optical and electron microscopy involve the diffraction, reflection, or refraction of radiation incident upon the subject of study, and the subsequent collection of this scattered radiation in order to build up an image. This process may be carried out by wide-field irradiation of the sample (for example, standard light microscopy and transmission electron microscopy) or by scanning a fine beam over the sample (for example, confocal microscopy and scanning electron microscopy). Scanning probe microscopy involves the interaction of a scanning probe with the surface of the object of interest.
Optical (or light) microscopy involves passing visible light, transmitted through or reflected from the subject, through a single lens or a series of lenses. The image can be detected directly by the eye, imaged on a photographic plate, or captured digitally. The single lens with its attachments, or the system of lenses and imaging equipment, along with the appropriate lighting equipment, sample stage and support, makes up the light microscope.
Limitations of standard optical microscopy (bright field microscopy) lie in three areas:
Live cells in particular generally lack sufficient contrast to be studied successfully; the internal structures of the cell are colourless and transparent. The most common way to increase contrast is to stain the different structures with selective dyes, but this involves killing and fixing the sample. Staining may also introduce artifacts: apparent structural details that are caused by the processing of the specimen and are thus not a legitimate feature of the specimen.
These limitations have, to some extent, all been overcome by specific microscopy techniques which can non-invasively increase the contrast of the image. In general, these techniques make use of differences in the refractive index of cell structures. It is comparable to looking through a glass window: you (bright field microscopy) don't see the glass but merely the dirt on the glass. There is however a difference as glass is a more dense material, and this creates a difference in phase of the light passing through. The human eye is not sensitive to this difference in phase but clever optical solutions have been thought out to change this difference in phase into a difference in amplitude (light intensity).
(For other examples of techniques to circumvent the limits to optical microscopy, see the "Super-resolution" section below.)
Bright field microscopy is the simplest of all the light microscopy techniques. Sample illumination is via transmitted white light, i.e. the sample is illuminated from below and observed from above.
Simple enhancements to this technique may involve:
This uses sideways (oblique) illumination, either by covering part of the light source to give asymmetric lighting, or by shining an external light source sideways at the sample. This gives the image a 3D appearance and can highlight otherwise invisible features. A more recent technique based on this method is Hoffmann's modulation contrast, a system most often found on inverted microscopes for use in cell culture.
Dark field microscopy uses a carefully aligned light source to minimise the quantity of directly transmitted (i.e. unscattered) light entering the image, collecting only the light scattered by the sample. This is done by confining the illumination to a ring of light.
Rheinberg illumination is a special variant of dark field illumination and is named after its inventor, Julius Rheinberg. In this variant transparent colored filters are inserted just before the condenser so that light rays at high aperture are differently colored than those at low aperture. E.g. the background to the specimen may be blue whilst the object appears self-luminous yellow. Other color combinations are possible but their effectiveness is quite variable.
More sophisticated techniques will show proportional differences in optical density. Phase contrast is a widely used technique that shows differences in refractive index as differences in contrast. It was developed by the Dutch physicist Frits Zernike in the 1930s (for which he was awarded the Nobel Prize in 1953). The nucleus in a cell, for example, will show up darkly against the surrounding cytoplasm. Contrast is excellent; however, the technique is not suited to thick objects. Frequently, a halo is formed even around small objects, which obscures detail. The system consists of a circular annulus in the condenser, which produces a cone of light. This cone is superimposed on a similarly sized ring within the phase objective. Every objective has a different ring size, so for every objective a different condenser setting has to be chosen. The ring in the objective has special optical properties: first of all, it reduces the intensity of the direct light, but more importantly, it creates an artificial phase difference of about a quarter wavelength. As the physical properties of this direct light have changed, interference with the diffracted light occurs, resulting in the phase contrast image.
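The quarter-wavelength trick can be made concrete with a small numerical model (illustrative amplitudes, not from the original text): the transmitted field of a weak phase object, exp(iφ), is split into the direct wave and the diffracted remainder, and the phase ring retards the direct wave by π/2.

```python
import cmath
import math

def detected_intensity(phi, phase_ring=False):
    """Interference of the direct (undiffracted) wave, 1, with the weak
    diffracted remainder, exp(i*phi) - 1, of a transparent phase object.
    The phase ring retards the direct wave by a quarter wavelength (pi/2)."""
    direct = 1.0 + 0j
    diffracted = cmath.exp(1j * phi) - 1.0
    if phase_ring:
        direct *= cmath.exp(-1j * math.pi / 2)
    return abs(direct + diffracted) ** 2

phi = 0.1  # small phase shift, e.g. from a transparent organelle
print(detected_intensity(phi, phase_ring=False))  # ~1.0: invisible in bright field
print(detected_intensity(phi, phase_ring=True))   # ~0.81: visible intensity contrast
```

Without the ring the two waves recombine into a pure phase factor, so the intensity is unchanged and the object is invisible; with the ring, the phase difference becomes an amplitude difference, exactly as described above.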
Superior and much more expensive is the use of interference contrast. Differences in optical density will show up as differences in relief. A nucleus within a cell will actually show up as a globule in the most often used differential interference contrast system, according to Georges Nomarski. However, it has to be kept in mind that this is an optical effect, and the relief does not necessarily resemble the true shape. Contrast is very good and the condenser aperture can be used fully open, thereby reducing the depth of field and maximizing resolution. The system consists of a special prism (Nomarski prism, Wollaston prism) in the condenser that splits light into an ordinary and an extraordinary beam. The spatial difference between the two beams is minimal (less than the maximum resolution of the objective). After passage through the specimen, the beams are reunited by a similar prism in the objective. In a homogeneous specimen, there is no difference between the two beams, and no contrast is generated. However, near a refractive boundary (say, a nucleus within the cytoplasm), the difference between the ordinary and the extraordinary beam will generate a relief in the image. Differential interference contrast requires polarized light to work properly: two polarizing filters have to be fitted in the light path, one below the condenser (the polarizer), and the other above the objective (the analyzer).
When certain compounds are illuminated with high-energy light, they emit light of a different, lower frequency. This effect is known as fluorescence. Specimens often show their own characteristic autofluorescence image, based on their chemical makeup.
This method is of critical importance in the modern life sciences, as it can be extremely sensitive, allowing the detection of single molecules. Many different fluorescent dyes can be used to stain different structures or chemical compounds. One particularly powerful method is the combination of antibodies coupled to a fluorochrome, as in immunostaining. Examples of commonly used fluorochromes are fluorescein and rhodamine. The antibodies can be tailored specifically for a chemical compound. For example, one strategy often in use is the artificial production of proteins based on the genetic code (DNA). These proteins can then be used to immunize rabbits, which form antibodies that bind to the protein. The antibodies are then coupled chemically to a fluorochrome and used to trace the proteins in the cells under study.
In recent work, highly efficient fluorescent proteins such as the green fluorescent protein (GFP) have been specifically fused on a DNA level to the protein of interest. This combined fluorescent protein is not toxic and hardly ever impedes the original task of the protein under study. Genetically modified cells or organisms directly express the fluorescently tagged proteins, which enables the study of the function of the original protein in vivo.
Since fluorescence emission differs in wavelength (color) from the excitation light, a fluorescent image ideally only shows the structure of interest that was labelled with the fluorescent dye. This high specificity led to the widespread use of fluorescence light microscopy in biomedical research. Different fluorescent dyes can be used to stain different biological structures, which can then be detected simultaneously, while still being specific due to the individual color of the dye.
To block the excitation light from reaching the observer or the detector, filter sets of high quality are needed. These typically consist of an excitation filter selecting the range of excitation wavelengths, a dichroic mirror, and an emission filter blocking the excitation light. Most fluorescence microscopes are operated in the epi-illumination mode (illumination and detection from one side of the sample) to further decrease the amount of excitation light entering the detector.
Confocal laser scanning microscopy generates the image in a completely different way from the normal bright field microscope. It gives slightly higher resolution, but most importantly, it provides optical sectioning, without out-of-focus light degrading the image. It therefore provides sharper images of three-dimensional objects, and is often used in conjunction with fluorescence microscopy.
Fluorescence microscopy is extremely powerful due to its ability to show specifically labelled structures within a complex environment, but also because of its inherent ability to provide three-dimensional information on biological structures. Unfortunately, this information is blurred by the fact that, upon illumination, all fluorescently labelled structures emit light, whether they are in focus or not. This means that an image of a certain structure is always blurred by the contribution of light from structures that are out of focus. This phenomenon becomes apparent as a loss of contrast, especially when using objectives with a high resolving power, typically oil immersion objectives with a high numerical aperture.
Fortunately, though, this phenomenon is not caused by random processes such as light scattering, but can be relatively well defined by the optical properties of image formation in the microscope imaging system. If one considers a small fluorescent light source (essentially a bright spot), the light coming from this spot spreads out more the further out of focus it is. Under ideal conditions, this produces a sort of "hourglass" shape of this point source in the third (axial) dimension. This shape is called the point spread function (PSF) of the microscope imaging system. Since any fluorescence image is made up of a large number of such small fluorescent light sources, the image is said to be "convolved by the point spread function".
Knowing this point spread function means that it is possible to reverse this process to a certain extent by computer-based methods commonly known as deconvolution. Various algorithms are available for 2D or 3D deconvolution. They can be roughly classified into non-restorative and restorative methods. While non-restorative methods can improve contrast by removing out-of-focus light from focal planes, only restorative methods can actually reassign light to its proper place of origin. This can be an advantage over other types of 3D microscopy, such as confocal microscopy, because light is not thrown away but reused. For 3D deconvolution, one typically provides a series of images taken from different focal planes (called a Z-stack), plus knowledge of the PSF, which can be derived either experimentally or theoretically from knowing all contributing parameters of the microscope.
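The convolution and its regularised inversion can be sketched in a few lines. The following toy example uses a hypothetical 1D "specimen" of two point sources and an assumed Gaussian PSF, with a simple Wiener-style frequency-domain filter standing in for the more elaborate restorative algorithms mentioned above.

```python
import numpy as np

# Toy 1D "specimen": two point sources (illustrative, not a real microscope model)
N = 128
x = np.zeros(N)
x[50], x[70] = 1.0, 0.7

# Gaussian point spread function (PSF), assumed known
t = np.arange(N)
psf = np.exp(-0.5 * ((t - 64) / 4.0) ** 2)
psf /= psf.sum()
psf = np.roll(psf, -64)  # center the PSF at index 0 for circular convolution

# The recorded image is the specimen convolved with the PSF
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))

# Wiener-style deconvolution: divide in Fourier space, with a small
# regularisation term eps to avoid amplifying noise at weak frequencies
eps = 1e-3
H = np.fft.fft(psf)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H) / (np.abs(H) ** 2 + eps)))

print(int(np.argmax(blurred)), int(np.argmax(restored)))  # both peaks sit at index 50
```

The restored peaks are narrower and taller than the blurred ones: out-of-focus-style spread is reassigned back toward its place of origin rather than merely discarded.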
It is well known that there is a spatial limit to which light can focus: approximately half of the wavelength of the light you are using. But this is not a true barrier, because the diffraction limit only holds in the far field, and localization precision can be increased with many photons and careful analysis (although two objects still cannot be resolved); and, like the sound barrier, the diffraction barrier is breakable. This section explores some approaches to imaging objects smaller than ~250 nm. Most of the following information was gathered (with permission) from a chemistry blog's review of sub-diffraction microscopy techniques.
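The scale involved can be put in numbers with the Abbe formula, d = λ/(2·NA), which gives the conventional far-field limit; the values below are illustrative.

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable separation (nm) for a conventional far-field
    microscope: d = wavelength / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light with a high-NA oil-immersion objective (illustrative values)
print(round(abbe_limit_nm(550, 1.4), 1))  # ~196 nm, i.e. the ~250 nm scale above
```

Even with the best conventional objectives, the limit stays in the low hundreds of nanometres, which is why the techniques below are needed for smaller objects.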
Probably the most direct way to break the diffraction barrier is to use a light source and/or a detector that is itself nanometer in scale. Diffraction as we know it is truly a far-field effect: the light from an aperture is the Fourier transform of the aperture in the far field. But in the near field, this is not necessarily the case. Near-field scanning optical microscopy (NSOM) forces light through the tiny tip of a pulled fiber, and the aperture can be on the order of tens of nanometers. When the tip is brought to within nanometers of a molecule, the resolution is limited not by diffraction but by the size of the tip aperture (because only that one molecule will see the light coming out of the tip). An image can be built up by raster-scanning the tip over the surface.
The main downside to NSOM is the limited number of photons you can force out of a tiny tip, and the minuscule collection efficiency (if you are trying to collect fluorescence in the near field). Other techniques, such as ANSOM (see below), try to avoid this drawback.
Instead of forcing photons down a tiny tip, some techniques create a local bright spot in an otherwise diffraction-limited spot. ANSOM is apertureless NSOM: it uses a tip very close to a fluorophore to enhance the local electric field the fluorophore sees. Basically, the ANSOM tip is like a lightning rod which creates a hot spot of light.
The Moerner lab at Stanford University uses bowtie nanoantennas to greatly and reproducibly enhance the electric field in the nanometer gap between the tips of two gold triangles. Again, the point is to enhance a very small region of a diffraction-limited spot, thus improving the mismatch between light and nanoscale objects, and breaking the diffraction barrier.
A recent favorite is STED: stimulated emission depletion. Stefan Hell at the Max Planck Institute developed this method, which uses two laser pulses. The first pulse is a diffraction-limited spot tuned to the absorption wavelength, which excites any fluorophores in that region; an immediate second pulse is red-shifted to the emission wavelength and stimulates emission back to the ground state, thus depleting the excited state of any fluorophores within this depletion pulse. The trick is that the depletion pulse goes through a phase modulator that makes it illuminate the sample in the shape of a donut, so the outer part of the diffraction-limited spot is depleted while the small center can still fluoresce. By saturating the depletion pulse, the center of the donut gets smaller and smaller, until resolution of tens of nanometers can be achieved.
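The saturation argument is often summarized by a modified resolution formula, d ≈ λ/(2·NA·√(1 + I/I_sat)): the harder the depletion beam saturates, the smaller the remaining fluorescent spot at the donut centre. A small sketch with illustrative values:

```python
import math

def sted_resolution_nm(wavelength_nm, na, saturation_factor):
    """Approximate STED resolution, d = lambda / (2*NA*sqrt(1 + I/I_sat)),
    where saturation_factor is the ratio of depletion intensity I to the
    saturation intensity I_sat. Values below are illustrative only."""
    return wavelength_nm / (2.0 * na * math.sqrt(1.0 + saturation_factor))

print(round(sted_resolution_nm(650, 1.4, 0), 1))    # no depletion: ~232 nm, the ordinary limit
print(round(sted_resolution_nm(650, 1.4, 100), 1))  # strong saturation: ~23 nm
```

With no depletion the formula reduces to the ordinary diffraction limit; cranking up the saturation factor shrinks the effective spot without bound, in principle.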
The methods above (and below) use experimental techniques to circumvent the diffraction barrier, but one can also use crafty analysis to increase the ability to know where a nanoscale object is located. The image of a point source on a charge-coupled device (CCD) camera is the point spread function (PSF), which is limited by diffraction to be no less than approximately half the wavelength of the light. But it is possible to simply fit that PSF with a Gaussian to locate the center of the PSF, and thus the location of the fluorophore. The precision with which this technique can locate the center depends on the number of photons collected (as well as the CCD pixel size and other factors). Regardless, groups like the Selvin lab and many others have employed this analysis to localize single fluorophores to a few nanometers. This, of course, requires careful measurements and the collection of many photons.
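The way precision improves with photon count can be illustrated with a toy simulation (assumed PSF width and photon numbers; a simple centroid stands in here for a full Gaussian fit, but it scales the same way, roughly as σ/√N):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_nm = 125.0   # assumed PSF standard deviation, ~half of a 250 nm spot
true_center = 0.0

def localise(n_photons):
    """Estimate the fluorophore position as the centroid of n photon arrival
    positions drawn from the Gaussian PSF."""
    photons = rng.normal(true_center, sigma_nm, size=n_photons)
    return photons.mean()

# Average localisation error over many repeated measurements
errors_100 = [abs(localise(100)) for _ in range(200)]
errors_10000 = [abs(localise(10000)) for _ in range(200)]
print(round(np.mean(errors_100), 1), round(np.mean(errors_10000), 1))
# the mean error drops from roughly 10 nm to roughly 1 nm as N grows 100-fold
```

A 125 nm wide PSF thus localises a single emitter to a few nanometres once enough photons are collected, which is the heart of the Selvin-style analysis described above.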
What fitting a PSF is to localization, photo-activated localization microscopy (PALM) is to "resolution" (the term is used loosely here to mean measuring the distance between objects, not true optical resolution). Eric Betzig and colleagues developed PALM; Xiaowei Zhuang at Harvard used a similar technique, called STORM: stochastic optical reconstruction microscopy. The basic premise of both techniques is to fill the imaging area with many dark fluorophores that can be photoactivated into a fluorescing state by a flash of light. Because photoactivation is stochastic, only a few well-separated molecules "turn on." Gaussians are then fit to their PSFs to high precision (see the section above). After the few bright dots photobleach, another flash of the photoactivating light activates random fluorophores again, and the PSFs of these different well-spaced objects are fit. This process is repeated many times, building up an image molecule by molecule; and because the molecules were localized at different times, the "resolution" of the final image can be much higher than that limited by diffraction.
The major problem with these techniques is that to get these beautiful pictures, it takes on the order of hours to collect the data. This is certainly not the technique to study dynamics (fitting the PSF is better for that).
There is also the wide-field structured-illumination (SI) approach to breaking the diffraction limit of light. SI, or patterned illumination, relies on both specific microscopy protocols and extensive software analysis after exposure. But, because SI is a wide-field technique, it is usually able to capture images at a higher rate than confocal-based schemes like STED (this is only a generalization; in principle STED could be made fast and SI slow). The main concept of SI is to illuminate a sample with patterned light and increase the resolution by measuring the fringes in the Moiré pattern (from the interference of the illumination pattern and the sample). "Otherwise-unobservable sample information can be deduced from the fringes and computationally restored."
SI enhances spatial resolution by collecting information from frequency space outside the observable region. This process is done in reciprocal space: the Fourier transform (FT) of an SI image contains superimposed additional information from different areas of reciprocal space; with several frames with the illumination shifted by some phase, it is possible to computationally separate and reconstruct the FT image, which has much more resolution information. The reverse FT returns the reconstructed image to a super-resolution image.
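The frequency mixing at the heart of SI can be sketched in one dimension (illustrative frequencies, chosen for this sketch): multiplying a fine sample pattern by a coarser illumination pattern produces a Moiré component at the difference frequency, which falls inside the observable low-frequency band and so carries the fine detail into reach.

```python
import numpy as np

N = 512
x = np.arange(N)
k_sample = 60  # fine sample detail (cycles per frame), taken to lie outside the passband
k_illum = 50   # structured-illumination stripe frequency

sample = 1.0 + 0.5 * np.cos(2 * np.pi * k_sample * x / N)
illum = 1.0 + np.cos(2 * np.pi * k_illum * x / N)

# The detected image is the product of sample and illumination patterns
moire = sample * illum
spectrum = np.abs(np.fft.rfft(moire))

# The five strongest frequency components of the detected image
peaks = sorted(int(k) for k in np.argsort(spectrum)[-5:])
print(peaks)  # [0, 10, 50, 60, 110]
```

The component at 10 cycles is the Moiré difference frequency k_sample - k_illum: sample detail at 60 cycles, shifted down into the observable region, ready to be computationally separated and restored as described above.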
But this only enhances the resolution by a factor of 2 (because the SI pattern cannot be focused to anything smaller than half the wavelength of the excitation light). To further increase the resolution, nonlinearities can be introduced, which show up as higher-order harmonics in the FT. Gustafsson uses saturation of the fluorescent sample as the nonlinear effect: a sinusoidal saturating excitation beam produces a distorted fluorescence intensity pattern in the emission, and this nonpolynomial nonlinearity yields a series of higher-order harmonics in the FT.
Each higher-order harmonic in the FT allows another set of images that can be used to reconstruct a larger area in reciprocal space, and thus a higher resolution. In this way, Gustafsson achieves resolving power below 50 nm, more than five times better than the microscope in its normal configuration.
The main problems with SI are that, in this incarnation, saturating excitation powers cause more photodamage and lower fluorophore photostability, and sample drift must be kept to below the resolving distance. The former limitation might be solved by using a different nonlinearity (such as stimulated emission depletion or reversible photoactivation, both of which are used in other sub-diffraction imaging schemes); the latter limits live-cell imaging and may require faster frame rates or the use of some fiducial markers for drift subtraction. Nevertheless, SI is certainly a strong contender for further application in the field of super-resolution microscopy.
Most modern instruments provide simple solutions for micro-photography and electronic image recording. However, such capabilities are not always present, and the more experienced microscopist will, in many cases, still prefer a hand-drawn image to a photograph. This is because a microscopist with knowledge of the subject can accurately convert a three-dimensional image into a precise two-dimensional drawing. In a photograph or other image-capture system, however, only one thin plane is ever in good focus.
The creation of careful and accurate micrographs requires a microscopical technique using a monocular eyepiece. It is essential that both eyes are open and that the eye that is not observing down the microscope is instead concentrated on a sheet of paper on the bench beside the microscope. With practice, and without moving the head or eyes, it is possible to accurately record the observed details by tracing round the observed shapes while simultaneously "seeing" the pencil point in the microscopical image.
Practising this technique also establishes good general microscopical technique. It is always less tiring to observe with the microscope focussed so that the image is seen at infinity and with both eyes open at all times.
As resolution depends on the wavelength of the illumination, electron microscopes, developed since the 1930s, use electron beams instead of light. Because of the much shorter wavelength of the electron beam, the resolution is far higher.
Though less common, X-ray microscopy has also been developed, since the late 1940s. The resolution of X-ray microscopy lies between that of light microscopy and electron microscopy.
For light microscopy, the wavelength of the light limits the resolution to around 0.2 micrometers. In order to gain higher resolution, electron microscopes use an electron beam, which has a far smaller wavelength.
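How much smaller that wavelength is can be estimated from the non-relativistic de Broglie formula, λ = h/√(2·m·e·V); the accelerating voltage below is illustrative, and relativistic corrections (ignored here) matter at higher voltages.

```python
import math

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron rest mass, kg
Q_E = 1.602176634e-19   # elementary charge, C

def electron_wavelength_pm(accel_voltage):
    """Non-relativistic de Broglie wavelength (picometres) of an electron
    accelerated through the given voltage: lambda = h / sqrt(2*m*e*V)."""
    return 1e12 * H / math.sqrt(2.0 * M_E * Q_E * accel_voltage)

print(round(electron_wavelength_pm(10_000), 2))  # ~12.26 pm at 10 kV
# versus 550,000 pm (550 nm) for green light: a wavelength ~40,000 times shorter
```

In practice, lens aberrations rather than wavelength limit electron-microscope resolution, but the tiny de Broglie wavelength is what makes sub-nanometre resolution possible at all.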
A scanning helium microscope has been suggested: a scanning imaging system using neutral helium atoms as probe particles. Such a device could provide resolution at the nanometer scale and be absolutely non-destructive, but it is not as well developed as the optical or electron microscope.
Examples of scanning probe microscopes are the atomic force microscope (AFM), the scanning tunneling microscope and the photonic force microscope. All such methods employ a solid-state probe tip in the vicinity (near field) of an object, which is supposed to be almost flat.
Ultrasonic Force Microscopy (UFM) has been developed in order to improve the details and image contrast on "flat" areas of interest where the AFM images are limited in contrast. The combination of AFM-UFM allows a near field acoustic microscopic image to be generated. The AFM tip is used to detect the ultrasonic waves and overcomes the limitation of wavelength that occurs in acoustic microscopy. By using the elastic changes under the AFM tip, an image of much greater detail than the AFM topography can be generated.
Ultrasonic force microscopy allows the local mapping of elasticity in atomic force microscopy by the application of ultrasonic vibration to the cantilever or sample. In an attempt to analyse the results of ultrasonic force microscopy in a quantitative fashion, a force-distance curve measurement is done with ultrasonic vibration applied to the cantilever base, and the results are compared with a model of the cantilever dynamics and tip-sample interaction based on the finite-difference technique.
Amateur microscopy is the investigation and observation of biological and non-biological specimens for recreational purposes using an optical (light) microscope. Collectors of minerals, insects, seashells and plants may use microscopes as tools to uncover features that help them classify their collected items. Other amateurs may be interested in observing the life found in pond water and other samples. Microscopes may also prove useful for water quality assessment for people who keep a home aquarium. Photographic documentation and drawing of the microscopic images are additional tasks that augment the spectrum of tasks of the amateur. There are even competitions for photomicrograph art. Participants in this pastime may either use commercially prepared microscope slides or may engage in the task of specimen preparation.
While microscopy is a central tool in the documentation of biological specimens, it is rarely sufficient to justify the discovery of a new species on the basis of microscopic investigations alone. Often, genetic and biochemical tests are necessary to confirm the discovery of a new species, and a fully equipped laboratory may be required, something often not available to amateurs. For this reason, it is unlikely that amateur microscopists can substantiate their finds to the extent required for a scientific publication.
The content of this section is licensed under the GNU Free Documentation License (local copy). It uses material from the Wikipedia article "Microscopy" modified August 9, 2007 with previous authors listed in its history.