01_MICROSCOPY
• Superposition principle: When two or more waves travel through the same medium at the
same time, they pass through each other without being disturbed. The displacement at any
point is simply the sum of the displacements of the individual waves
• Interference: Occurs when two or more waves overlap and combine to form a new wave
pattern. This can occur with any type of wave, including light, sound, and water waves.
o Constructive: If the crests and troughs of two waves align, they add up to create a
wave with larger amplitude. This results in a brighter or louder signal.
o Destructive: If the crest of one wave coincides with the trough of another, they
cancel each other out, resulting in a wave with a smaller amplitude or even
complete cancellation. This leads to a dimmer or quieter signal (e.g. noise-cancelling
headphones).
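As a minimal sketch of how superposed amplitudes combine (the helper name is my own; it uses the standard two-wave interference formula):

```python
import math

def superpose(a1, a2, phase_diff):
    # Resultant amplitude of two equal-frequency waves with amplitudes a1, a2
    # and a phase difference phase_diff (standard interference formula).
    return math.sqrt(a1**2 + a2**2 + 2 * a1 * a2 * math.cos(phase_diff))

print(superpose(1.0, 1.0, 0.0))      # in phase: amplitudes add -> 2.0
print(superpose(1.0, 1.0, math.pi))  # out of phase: complete cancellation -> 0.0
```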
Polarised light is made up of light waves whose electric field vibrations all occur in a single plane. That is different
from most light, in which the vibrations occur in multiple planes.
Types: Linear (vibrate in one plane), circular (rotate in a circular motion as they travel), elliptical (follow an elliptical path).
Optically active molecules rotate linearly polarised light
• Production of polarised light: Birefringence (anisotropic refractive index; certain materials split light into two rays
with perpendicular polarisations), polarising filters (absorption, allow only light vibrating in a specific plane to pass
through), reflection (light reflecting off surfaces like water or glass can become polarised)
• Interference of polarised waves: When two polarised light waves intersect, their final polarisation depends on their
relative phase and amplitude. This can result in various interference patterns, enhancing or diminishing the light
intensity.
1.2. Ray optics
Reflection occurs when light bounces off a surface. It can be specular (off a planar surface at a definite angle, not wavelength
dependent) or diffuse (off a rough surface, in many directions).
Refraction is the bending of light as it passes from one medium to another with a different refractive index. The change in
speed between the media causes the light to change direction.
Snell’s law describes how light bends when entering a different medium: 𝑛1 sin 𝜃1 = 𝑛2 sin 𝜃2 . (n: refractive index, 𝜃:
incidence/refraction angles)
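A small Python sketch of Snell's law (the helper name is my own); it also flags total internal reflection, which occurs when sin θ₂ would exceed 1:

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    # Snell's law: n1 sin(theta1) = n2 sin(theta2). Returns theta2 in degrees,
    # or None for total internal reflection (sin(theta2) would exceed 1).
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if s > 1.0:
        return None  # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))

# Air (n = 1.00) into water (n = 1.33): the ray bends towards the normal.
print(round(refraction_angle(1.00, 1.33, 30.0), 1))  # 22.1
# Water into air beyond the critical angle (~48.8 deg): no refracted ray.
print(refraction_angle(1.33, 1.00, 60.0))            # None
```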
Thin lenses: can be either converging (convex) or diverging (concave). The thin lens eq. relates the object distance (d_o),
the image distance (d_i) and the focal length of the lens (f): 1/d_o + 1/d_i = 1/f
Setup for when collimated light (parallel rays) is needed, like in telescopes: the source is placed at the focal point of the lens, so the rays emerge parallel.
Magnification: How much larger or smaller the image is compared to the object. It is the ratio of the height of the image (h_i)
to the height of the object (h_o), and it can also be related to the distances: M = h_i / h_o = −d_i / d_o
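The thin lens equation and the magnification relation can be combined in a short sketch (helper names and the 30 cm / 10 cm values are illustrative):

```python
def thin_lens_image_distance(d_o, f):
    # Thin lens equation: 1/d_o + 1/d_i = 1/f  ->  d_i = 1 / (1/f - 1/d_o)
    return 1.0 / (1.0 / f - 1.0 / d_o)

def magnification(d_o, d_i):
    # M = h_i / h_o = -d_i / d_o (a negative M means an inverted image)
    return -d_i / d_o

# An object 30 cm in front of a converging lens with f = 10 cm:
d_i = thin_lens_image_distance(30.0, 10.0)
print(round(d_i, 2))                       # 15.0 cm: a real image
print(round(magnification(30.0, d_i), 2))  # -0.5: inverted, half size
```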
Numerical Aperture (NA) measures the ability of a lens to gather light and resolve fine specimen detail at a fixed object
distance: NA = 𝑛sin(𝜃)
• Higher NA → higher resolution, and the lens can gather more light, making the image brighter.
• A medium with a higher refractive index increases the NA. → 𝑛:Air 1.00, water 1.33, immersion oil 1.51.
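A quick sketch of how the immersion medium raises NA for the same acceptance cone (the 67.5° half-angle is an assumed illustrative value):

```python
import math

def numerical_aperture(n, half_angle_deg):
    # NA = n * sin(theta), theta = half-angle of the cone of light the lens accepts
    return n * math.sin(math.radians(half_angle_deg))

# The same 67.5-degree half-angle in different immersion media:
for medium, n in (("air", 1.00), ("water", 1.33), ("oil", 1.51)):
    print(f"{medium}: NA = {numerical_aperture(n, 67.5):.2f}")
```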
Optical Path Length (OPL) describes the distance that light travels through a medium, weighted by its refractive index. It is
used for calculating interference and diffraction patterns, and is related to the number of periods a wave experiences when
travelling between two points. A higher OPL means a greater phase shift of light as it travels through different materials. OPL = n ⋅ d
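A minimal sketch relating OPL to the accumulated phase (helper names and the 10 µm / 500 nm values are illustrative):

```python
import math

def optical_path_length(n, d):
    # OPL = n * d: the equivalent vacuum distance light travels in the medium.
    return n * d

def phase(n, d, wavelength):
    # Phase accumulated over the path: 2*pi * OPL / lambda (radians).
    return 2 * math.pi * optical_path_length(n, d) / wavelength

# 10 um of glass (n = 1.5) vs 10 um of air, at 500 nm:
extra = phase(1.5, 10e-6, 500e-9) - phase(1.0, 10e-6, 500e-9)
print(round(extra / (2 * math.pi), 2))  # extra optical path in whole wavelengths
```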
Aberrations
• Chromatic: Different wavelengths (colours) of light are focused at different points. This results in colour fringes
around the edges of objects.
• Spherical Aberration: Light rays that pass through the edges of a lens are focused at different points than those
passing through the centre. This causes a blurred image.
• Coma: Off-axis points of light appear as comet-shaped blobs rather than points. This is especially noticeable in
wide-field imaging.
• Astigmatism: Light rays in different planes (horizontal and vertical) are focused at different points. This results in
images that are sharp in one direction but blurred in the perpendicular direction.
• Field Curvature: A flat object is imaged onto a curved surface. This means that the edges of the image are out of
focus when the centre is in focus, and vice versa.
Correcting aberrations
Eye Light enters the eye through the cornea and passes through the pupil (an adjustable opening). The lens inside the eye then
changes shape to focus the incoming light onto the retina (accommodation). The retina contains photoreceptors (rods and
cones) to detect and convert light into electrical signals that are sent to the brain via the optic nerve.
The human eye can detect differences in light intensity and colour (resolution of about 0.1 mm).
The retina contains pigments like rhodopsin in rods (for B&W vision in low light) and photopsins in cones (for colour vision;
these work best in bright light).
Compound microscope
• Eyepiece (Ocular Lens): The lens you look through, which further magnifies the
image created by the objective lens. The eyepiece usually has a magnification of
10x.
• Objective lens: This is the primary lens that magnifies the specimen. Compound
microscopes typically have multiple objective lenses with different magnifications
(e.g., 4x, 10x, 40x, 100x).
• Stage: The platform where the specimen slide is placed. It often has clips to hold
the slide in place.
• Condenser: Focuses the light from the light source onto the specimen.
• Diaphragm: Controls the amount of light reaching the specimen, improving contrast
and resolution.
• Light source: Provides illumination to the specimen. It can be a built-in light bulb
or a mirror that reflects external light.
• Focus Knobs: Used to adjust the focus of the microscope. There are usually two
knobs: coarse focus for general focusing and fine focus for precise adjustments.
Magnification: The total magnification of the microscope is the product of the magnifications of the eyepiece and the
objective lens. 𝑀 = 𝑀𝑜𝑐 × 𝑀𝑜𝑏𝑗
The objective lens creates a real, inverted image of the specimen. This image is then magnified by the eyepiece to produce
a virtual image that you see when looking through the microscope.
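Total magnification is just the product of the two stages; a tiny sketch using the standard objective powers listed above:

```python
def total_magnification(m_ocular, m_objective):
    # Total magnification is the product of eyepiece and objective powers.
    return m_ocular * m_objective

# A 10x eyepiece combined with each standard objective:
for m_obj in (4, 10, 40, 100):
    print(f"{m_obj}x objective -> {total_magnification(10, m_obj)}x total")
```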
In infinity optical systems, the objective lens projects the image to infinity instead of some finite image distance. To form
a real image, an additional lens called the tube lens is used, which focuses the parallel light rays from the objective lens to
form an intermediate image at a specific distance. This design can reduce optical aberrations to improve image quality.
Infinity space can accommodate additional components that can be easily added or removed (flexibility, versatility).
Illumination
Critical illumination works by positioning the light source in such a way that its image is projected onto the specimen through
the condenser lens. This method is straightforward and provides bright illumination. However, any imperfections or
inhomogeneities in the light source are also focused onto the specimen, which can reduce image quality.
Köhler illumination is a more advanced technique that ensures even and bright illumination across the entire field of view.
In this setup, the light source is focused onto the condenser diaphragm, and the condenser then projects collimated (parallel)
light onto the specimen. This improves contrast and avoids projecting imperfections from the light source onto the specimen, resulting
in higher image quality.
The Rayleigh criterion defines the limit of resolution. According to this criterion, two points are considered
resolvable if the centre of one Airy disk coincides with the first minimum of the other. The minimum resolvable
distance is: d = 1.22 λ / (2 NA).
An object can be regarded as a collection of point sources. The image in the microscope is built up out of overlapping Airy
patterns.
Abbe’s diffraction limit indicates that the smallest detail that can be resolved is approximately half the wavelength of the
light used. We will not be able to resolve structural details <200 nm with light microscopy
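Both limits can be computed directly (illustrative values: green light at 550 nm and an oil-immersion objective with NA = 1.4):

```python
def rayleigh_limit(wavelength_nm, na):
    # Rayleigh criterion: d = 1.22 * lambda / (2 * NA)
    return 1.22 * wavelength_nm / (2 * na)

def abbe_limit(wavelength_nm, na):
    # Abbe diffraction limit: d = lambda / (2 * NA)
    return wavelength_nm / (2 * na)

# Green light (550 nm) with a high-NA oil-immersion objective (NA = 1.4):
print(round(rayleigh_limit(550, 1.4)))  # 240 nm
print(round(abbe_limit(550, 1.4)))      # 196 nm
```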
Trade-off between resolution and contrast:
• Larger NA → smaller Airy disks → higher resolution, but reduced contrast, because it increases the intensity of
undiffracted light, which forms the background signal.
• Adjusting the condenser diaphragm:
o Opening the aperture diaphragm: brightness and resolution ↑, contrast ↓
o Closing the aperture diaphragm: brightness and resolution ↓, contrast ↑
• Proper alignment of the light source, condenser, and diaphragms. Misalignment can lead to uneven illumination
and reduced image quality.
• Due to the importance of diffraction and interference in image formation, optimal coherence is desirable for the
light that illuminates the sample. Coherent light consists of waves with a fixed/preserved phase relation over a
certain distance (coherence length)
Role of the condenser is to focus light from the microscope’s light source onto the specimen so that the light is directed
onto the area of interest, providing uniform illumination. This way, the condenser enhances the resolution and contrast of
the image. Properly adjusted, it allows the microscope to achieve its maximum NA.
Dark field enhances contrast in unstained specimens by using a dark field condenser that creates a hollow cone of light and
blocks direct light, allowing only scattered light to reach the objective lens. The specimen appears bright against a dark
background. Used for observing live bacteria, algae, and small aquatic organisms, and for detecting small particles and fine
structures.
Phase contrast converts phase shifts in light passing through a transparent specimen (due to variations in thickness and
refractive index) into changes in brightness. A phase plate in the objective lens shifts the phase of the background light by
a quarter wavelength. Interference between the background light and the light diffracted by the specimen creates contrast.
No staining is needed; it is used for observing live cells, organelles, and motility.
Polarisation uses polarised light to interact with specimens that are birefringent (having different refractive indices in the
different directions), altering its polarisation state. Only light with a specific polarisation is allowed to pass through an
analyser, which is placed after the specimen. Contrast is created based on the birefringent properties of the specimen. Used to
study crystalline structures, minerals, and biological specimens like muscle fibres and starch grains.
• Polariser: Filter placed below the specimen stage, only lets waves vibrating in a single direction pass through
• Analyser: Often a 2nd polariser placed above the specimen; it further refines the light, revealing details in anisotropic
materials (different properties in different directions).
Differential Interference Contrast (DIC) uses polarised light which is split in S and P polarisations by a Wollaston prism.
The two beams pass through the specimen at slightly different locations and are recombined by a second prism. The
differences in OPL between the two beams create interference, generating high-contrast images with 3D appearance. It is
used for observing live, unstained samples, and to study cell morphology, organelles, and dynamic processes within cells.
02_LASERS
Emission. The energy difference between the initial and final levels sets the emitted photon’s energy.
• Spontaneous: When an excited atom decays to a lower energy state by emitting a photon, with the photon’s energy
matching the energy difference between the initial and final states. This process occurs randomly and is not
influenced by external photons.
• Stimulated: An incoming photon triggers an excited atom to emit a photon with the same direction, phase, and
wavelength. This principle is fundamental to laser operation.
Absorption. When a photon with energy matching the energy difference between two atomic states is absorbed by an atom,
the atom transitions from a lower to a higher energy state.
Fluorescence. The emission of light by a substance that has absorbed light or other
electromagnetic radiation. The process begins with excitation, where a molecule
absorbs a photon and transitions from the ground state to an excited state. The
molecule then returns to the ground state by emitting a photon, which has a longer
wavelength (lower energy) than the absorbed photon. This shift in wavelength is
known as the Stokes shift. The re-emission of light occurs spontaneously and
almost immediately after excitation, typically within nanoseconds.
2.2. LASER (Light Amplification by Stimulated Emission of Radiation)
When an excited atom or molecule encounters a photon with energy matching the energy difference between its excited
state and a lower energy state, it can be stimulated to emit a second photon. The emitted photon has the same energy, phase,
and direction as the stimulating photon. This results in coherent light, where the waves are in phase and travel in the same
direction.
For light amplification to occur, there must be more atoms or molecules in the excited state than in the lower energy state.
This condition is known as population inversion. This is typically achieved through a process called pumping, which
involves supplying energy to the system. Pumping methods include optical pumping (using light), electrical discharge
(gases), injection of carriers (semiconductors), irradiation with electron beams, chemical reactions and shock waves.
Pumping: energy is supplied to the gain medium (solid, liquid,
gas), exciting atoms.
Spontaneous emission: some excited atoms spontaneously emit
photons.
Stimulated emission: These photons stimulate other excited
atoms to emit additional photons, leading to a chain reaction.
The optical cavity also experiences losses due to reflections, scattering, and absorption within the mirrors and the active
medium. These losses reduce the efficiency of the laser. For lasing to occur, the gain from the stimulated emission must
overcome these losses. The point at which this happens is known as the lasing threshold.
Level systems
• Two-Level system: Consists of a ground state and an excited state. Pumping increases the population in the excited
state but achieving population inversion is challenging because the probability of de-excitation increases with
pumping. Two-level systems generally cannot produce a laser effect due to the difficulty in maintaining population
inversion.
• Three-Level system: Involves a ground state, an intermediate state, and an excited state. Pumping excites atoms to
the highest energy state, which quickly decays to the intermediate state without emitting a photon. Population
inversion is achieved between the intermediate and ground states, allowing stimulated emission to occur.
• Four-Level system: Includes an additional energy level above the ground state, which acts as the lower laser level.
Pumping excites atoms to the highest energy state, which decays to an intermediate state; stimulated emission occurs
down to the lower laser level, which then quickly decays to the ground state. Population inversion is more easily
achieved because the lower laser level is not the ground state, reducing the likelihood of reabsorption of emitted photons.
Types of lasers are classified based on the type of active medium they use.
• Solid-state lasers: use solid materials like crystals or glasses doped with rare-earth or transition metal ions.
Examples include the ruby laser and the Nd:YAG laser, which are used in material processing, medical procedures,
and military applications.
• Gas lasers: utilise gaseous substances as their active medium. The HeNe laser and the CO₂ laser are prominent
examples. These lasers find applications in cutting and welding materials, medical surgeries, and scientific research
due to their precise and powerful beams.
• Liquid lasers: dye lasers use organic dyes dissolved in solvents. These lasers are useful for their tunability, making
them ideal for spectroscopy, medical treatments, and various research applications.
• Semiconductor lasers: including diode lasers, these are ubiquitous in everyday technology. Found in laser pointers,
barcode scanners, and optical disc drives, they are crucial for telecommunications, data storage, and consumer electronics.
Spectral Filters
• Long-pass: Transmit wavelengths longer than a specified cut-on wavelength while blocking shorter wavelengths.
• Short-pass: Transmit wavelengths shorter than a specified cut-off wavelength while blocking longer wavelengths.
• Band-pass: Transmit a specific range of wavelengths between a cut-on and cut-off wavelength, blocking
wavelengths outside this range.
• Notch: Block a narrow range of wavelengths while transmitting wavelengths outside this range.
Dichroic filters (Beam Splitters): Transmit a range of wavelengths and reflect the remaining wavelengths. Used for combining
or splitting beams of different colours in fluorescence and other optical systems.
Neutral density filters attenuate the intensity of light across a broad spectrum without affecting its colour. Optical density
(OD) indicates the attenuation factor provided by the filter. Higher OD values correspond to greater attenuation.
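The OD-to-transmission relation T = 10^(−OD) in a short sketch (the OD values are illustrative):

```python
def nd_transmission(optical_density):
    # A neutral density filter with optical density OD transmits a fraction 10**(-OD)
    return 10 ** (-optical_density)

for od in (0.3, 1.0, 2.0):
    print(f"OD {od}: {nd_transmission(od) * 100:.1f}% transmitted")
```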
Polarising filters: Transmit light with a specific polarisation while blocking light with other polarisations. Useful for
enhancing contrast in microscopy, as well as reducing glare and reflections in photography.
Interference filters: Use the principle of interference to selectively transmit or reflect specific wavelengths. Consist of
multiple thin layers of dielectric materials with varying refractive indices. Use for wavelength selection in spectroscopy
and laser systems.
Absorptive filters: Made from coloured glass or plastic which absorb specific wavelengths of light while transmitting
others. For protection in laser safety goggles.
2.4. Detectors
Photoelectric effect: Light incident on a metal surface can eject electrons if the photon energy exceeds the work function
of the metal. The kinetic energy of the ejected electrons depends on the frequency of the incident light, not its intensity.
Basis for the operation of photomultiplier tubes and some types of photodiodes.
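A sketch of the photoelectric energy balance KE_max = hf − W (the caesium work function of ~2.1 eV is an assumed typical value):

```python
# Photoelectric effect: KE_max = h*f - W, and zero emission if the photon
# energy is below the work function W. Planck constant in eV*s for convenience.
H_EV = 4.135667696e-15  # eV*s
C = 2.99792458e8        # m/s

def max_kinetic_energy_ev(wavelength_nm, work_function_ev):
    photon_energy = H_EV * C / (wavelength_nm * 1e-9)  # E = h*c/lambda
    return max(0.0, photon_energy - work_function_ev)

# Caesium photocathode (W ~ 2.1 eV, an assumed typical value):
print(round(max_kinetic_energy_ev(400, 2.1), 2))  # violet light ejects electrons
print(max_kinetic_energy_ev(700, 2.1))            # red light: 0.0, no emission
```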
Choose a detector with high quantum efficiency for low-light applications. For fast measurements, detectors with short
response times. The detector needs to be sensitive enough to the specific wavelength range of interest. For high-precision
measurements, detectors with low noise characteristics are considered.
Photomultiplier tube (PMT): is a highly sensitive light detector
that amplifies the signal generated by incident photons. When
photons hit the photocathode, they eject electrons due to the
photoelectric effect. The ejected electrons are accelerated
towards the first dynode by an electric field. Upon striking the
dynode, each electron releases multiple secondary electrons very
quickly. This process repeats across multiple dynodes, with each
stage further amplifying the number of electrons. The multiplied
electrons are finally collected at the anode, producing a current
that is proportional to the initial light intensity. PMTs can detect single photons, are highly sensitive to low light levels,
and have low intrinsic noise, enhancing their ability to detect weak signals. However, they are sensitive to magnetic fields,
which can affect their performance, and they require high-voltage power supplies.
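The cascaded dynode amplification can be sketched with a simple gain = δⁿ model (the per-stage factor of 4 and the 10 dynodes are illustrative assumptions):

```python
def pmt_gain(secondary_emission_factor, n_dynodes):
    # Each dynode multiplies the electron count by its secondary-emission
    # factor delta, so the total gain is delta ** n (illustrative model).
    return secondary_emission_factor ** n_dynodes

# e.g. 4 secondary electrons per stage over 10 dynodes:
print(f"{pmt_gain(4, 10):.0e}")  # about a million-fold amplification
```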
Photodiode: a semiconductor device that converts light into an electrical current. The core of a photodiode is a p-n junction,
which is a boundary between p-type and n-type semiconductor materials. When light photons strike the photodiode, they
are absorbed and generate electron-hole pairs in the semiconductor material. The built-in electric field at the p-n junction
separates these charge carriers, causing electrons to move towards the n-type region and holes towards the p-type region,
creating a current (A/W).
• Photovoltaic mode: the photodiode operates without an external bias. It generates a voltage when exposed to light,
like a solar cell, so it is used in solar panels and light meters.
• Photoconductive mode: With a reverse bias, which increases the width of the depletion region and reduces the
capacitance, allowing for faster response times. It is used in optical communication systems and light detection.
• Avalanche mode: Using high reverse bias. The high electric field causes the initial photo-generated carriers to gain
enough energy to ionise atoms in the semiconductor, creating additional electron-hole pairs and leading to an
avalanche multiplication process. This mode is used in applications requiring high sensitivity, such as photon
counting and low-light detection.
The quantum efficiency is the ratio of the number of charge carriers generated to the number of incident photons. High
quantum efficiency means better sensitivity. Photodiodes in photoconductive mode have faster response times due to the
reduced capacitance and generally have low noise.
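Quantum efficiency relates to the responsivity (A/W) mentioned above via R = QE·qλ/(hc); a sketch with illustrative values (80% QE at 850 nm):

```python
def responsivity_a_per_w(quantum_efficiency, wavelength_nm):
    # R = QE * q * lambda / (h * c), in amperes of photocurrent per watt of light.
    q = 1.602176634e-19  # elementary charge, C
    h = 6.62607015e-34   # Planck constant, J*s
    c = 2.99792458e8     # speed of light, m/s
    return quantum_efficiency * q * wavelength_nm * 1e-9 / (h * c)

# A photodiode with 80% quantum efficiency at 850 nm (illustrative values):
print(round(responsivity_a_per_w(0.8, 850), 3))  # 0.548 A/W
```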
Charge-coupled device (CCD): consists of an array of light-sensitive elements called pixels. Each pixel acts as a small
capacitor that accumulates charge proportional to the amount of light that hits it. When photons strike the pixels, they
generate electron-hole pairs. The electrons are collected in the potential wells of the pixels, creating a charge proportional
to the light intensity. After exposure, the accumulated charge in each pixel is transferred sequentially to the adjacent pixel
and eventually to the output node (charge transfer). The charge packets are converted into a voltage signal at the output
node, which is then amplified and digitised to form an image. CCDs are highly sensitive and have low noise. In addition, they
can capture a wide range of light intensities, from very dim to very bright, without losing detail.
Complementary Metal-Oxide-Semiconductor (CMOS): Like CCDs, CMOS detectors consist of an array of pixels. Each
pixel contains a photodiode and associated circuitry for signal amplification and readout. When light photons strike the
photodiode, they generate electron-hole pairs, and the resulting charge is proportional to the light intensity. Because each
pixel has its own amplifier and readout circuitry, signals from multiple pixels can be processed simultaneously. This parallel
processing enables faster readout speeds compared to CCDs. CMOS detectors consume less power than CCDs, making
them suitable for battery-operated devices.
03_CONFOCAL LASER SCANNING MICROSCOPY
Confocal microscopy uses a narrow laser light source and a more complex optical path, which includes a sensitive detector
to handle the lower light intensities. This setup allows for higher Z resolution (around 1 µm) and XY resolution
(approximately 200 nm), resulting in images with reduced out-of-focus blur and clearer details of specific focal planes.
However, this technology comes at a higher cost and requires more maintenance.
Wide-field microscopy typically employs a broad-spectrum light source and a simpler optical path with a standard camera
for detection. While it offers lower Z resolution and more out-of-focus blur compared to confocal microscopy, it is less
expensive and easier to maintain. This makes it ideal for general imaging tasks where high Z resolution is not that important.
3.1. Optical Principles of Confocal Imaging
Epi-illumination: both the illumination and observed light are on the same side of the specimen. A laser beam is directed
through the objective lens, focusing light onto a specific point on the sample and exciting the fluorophores. The emitted
fluorescence is then collected by the same lens and sent back through the optical path to the detector. This setup ensures
that only light from the focal plane is detected, enhancing image resolution and clarity. By aligning the illumination and
detection paths, this method minimizes light loss, maximizes the fluorescence signal, reduces background noise, and
improves the SNR, resulting in clearer and more detailed images.
Laser beam generates monochromatic light. Various types of lasers can be used, such as Argon, Argon-Krypton, and
Helium-Neon, each offering different wavelengths. Different laser lines (wavelengths) are available, ranging from UV to
visible light (e.g., 352 nm, 488 nm, 633 nm). Lasers provide high-intensity light, which is necessary for penetrating deep
into the specimen and achieving high-resolution images.
Types of filters
• Excitation filters: Illuminate the sample with precisely the right light, letting only the specific wavelength needed
to excite the fluorophores pass through.
• Emission filters: To only let the desired fluorescence signal through to the detector. This cuts down on background
noise (autofluorescence of the sample, scattering).
• Dichroic mirrors: They reflect certain wavelengths and let others pass through, separating the excitation light from
the emitted fluorescence, sending each one down the right path (dual fluorescence dye microscopy).
Fluorophores (dyes): molecules that absorb light and emit it at a longer wavelength upon excitation. Each fluorophore
has specific excitation and emission wavelengths, which determine the colour of light it emits, as well as the
fluorescence intensity (brightness).
Jablonski diagrams: show the electronic states of a molecule and the transitions between them.
Quenching: like a temporary dimming of the lights. It happens when another molecule gets too close to the fluorophore
and causes it to lose energy without emitting light. This process is reversible, so once the interfering molecule goes away,
the fluorescence can come back. A common example of this is Fluorescence Resonance Energy Transfer (FRET), where
energy is transferred between two nearby fluorophores. FRET is very useful for studying how molecules interact and bind within
cells, because the transfer reports on their proximity.
Photobleaching: more like a permanent burnout. When a fluorophore is exposed to light for too long, it undergoes
chemical changes, often involving reactions with oxygen, and loses its ability to fluoresce forever. This is an irreversible
process. Techniques like Fluorescence Recovery After Photobleaching (FRAP) and Fluorescence Loss in Photobleaching
(FLIP) take advantage of this to study the movement and dynamics of molecules within live cells. Handy for looking at
how molecules move around and how fluid cell membranes are.
Pinhole: When the laser hits the sample, the emitted light goes through the objective lens and is focused onto the pinhole.
Only the light that lines up perfectly with the pinhole gets through to the detector. In CLSM, two pinholes are used: one placed
in front of the illumination source and the other in front of the detector.
• Blocks out-of-focus light, giving better depth resolution (a sharper image). The size is adjustable (10 µm – 10 mm): a
smaller pinhole means higher resolution but less light, so lower signal strength. By blocking stray light, it also reduces
background noise. Pinholes often have a hexagonal shape to optimise the light path. NA mainly impacts lateral resolution
but also affects axial (Z) resolution; in confocal microscopy, however, it is mainly the pinhole size that sets the Z resolution.
Galvanometer: measures small electric currents by deflecting a pointer. When current flows through a coil in a magnetic
field, it creates its own magnetic field, causing the coil to twist. This twist moves a pointer along a scale, showing the
current level.
How is it possible that the exciting beam shares the same path as the emission beam simultaneously even when the
scanning mirrors do not stop the raster scanning?
In confocal microscopy, the exciting beam and the emission beam share the same path thanks to a beam splitter. The laser
light (exciting beam) is directed down to the sample, and when the sample fluoresces, the emitted light (emission beam)
travels back up the same path. The beam splitter reflects the laser light but lets the emitted light pass through to the detector.
The scanning mirrors move the laser across the sample in a precise pattern, and everything is perfectly timed. This
synchronisation ensures that the emitted light is collected accurately, even while the laser is scanning. This setup is super-
efficient and ensures high-resolution images by keeping everything in sync.
Photomultiplier Tube (PMT) is used to detect and amplify the light signals of the sample. Converts incoming photons
(light particles) into electrons through the photoelectric effect. These electrons are then multiplied through a series of
dynodes, creating a stronger electrical signal. This amplified signal is what gets processed to form the image. Because
confocal microscopy often deals with very low light levels, the PMT is crucial for boosting the signal so you can get clear,
detailed images. It’s like turning up the volume on a whisper so you can hear it loud and clear.
3.2. Main factors affecting quality of a confocal image
Spatial resolution: Depends on the pinhole and objective aperture, the zoom factor (used to reduce pixel size while maintaining
spatial resolution and to magnify the image for display and printing, but too much zoom leads to oversampling, where apparent
detail is enhanced without any real gain), and the scan rate (slower scan rates can improve image quality by allowing more light
to be collected, but they also increase the time needed to capture an image).
Resolution of light intensity (dynamic range): the range of light intensities that the system can capture, from the darkest
to the brightest parts of the image. A higher dynamic range means you can see more detail in both the shadows and
highlights.
• Gain boosts the brightness by amplifying the input signal, making faint features more visible, but too much can add
noise. Offset, on the other hand, adjusts the baseline, setting the darkest parts of the image to true black, which
enhances contrast. Together, gain and offset are used to adjust the detector signal (input) so that a maximal
number of grey levels is included in the resulting image (output).
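A minimal sketch of how gain and offset stretch a weak signal range onto the full 8-bit grey scale (the function name and the 10–50 signal range are illustrative):

```python
def to_grey_level(signal, gain, offset, max_level=255):
    # The detector signal is scaled by gain and shifted by offset, then clipped
    # to the available grey levels (8-bit here: 0-255).
    value = gain * signal + offset
    return max(0, min(max_level, round(value)))

# Weak signal spanning 10-50 stretched to use the full 8-bit range:
# gain = 255 / (50 - 10), offset chosen so that signal 10 maps to black.
gain, offset = 255 / 40, -255 / 40 * 10
print(to_grey_level(10, gain, offset))  # 0   (darkest feature -> black)
print(to_grey_level(50, gain, offset))  # 255 (brightest feature -> white)
```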
S/N ratio: balance between the actual signal from your sample and the background noise. To improve SNR, increase the
light intensity (but watch out for photobleaching), adjust the pinhole size (larger lets in more light but reduces resolution),
slow down the scan speed (to collect more light), and tweak the detector settings (like gain and offset).
Temporal resolution: Speed at which the microscope can take images. Usually, we don’t work with the fastest scanning mode
on the microscope, because that would lead to less signal. To gather sufficient photons, a minimal dwell time in each pixel is
necessary. The goal is to find a balance between scanning speed and maintaining a good S/N ratio.
Advantages
• High resolution: sharper images by focusing on a single plane and cutting out the blurry stuff.
• Images from different depths can be combined to build 3D models of your sample.
• Transverse x–z or y–z cross-sectional views can be generated by most confocal software programs.
• The pinhole blocks out unwanted light, making your images clearer.
• Works with various fluorescent probes and is great for live-cell imaging to watch processes in real time.
• Pinhole/PMT: Excludes out-of-focus light, improving Z-resolution.
• Raster Scanning: Replaces widefield imaging, reconstructing the image by computer.
Disadvantages
• Photobleaching: Long exposure to laser light can fade fluorescent dyes, limiting imaging time and quality.
• Struggles with thick samples, as light scattering and absorption mess with deeper layers.
• Sample: Requires extensive prep like fixation and staining, which can alter the sample’s natural state.
• Limited FOV: Smaller view and slower scanning speed compared to wide-field microscopy, making it tough to image larger areas.
• XY resolution limit of ≈200 nm given by the wave nature of light; super-resolution methods are needed to overcome this.
04_SUPER RESOLUTION
4.1. Fluorescence essentials
• Photobleaching: Degradation of the fluorophore due to oxidation from the excited state. Over time, the energy
from the light causes chemical changes in the molecule, leading to its destruction. Once damaged, it can no longer
fluoresce, limiting the time the sample can be observed.
• Blinking: Intermittent fluorescence. Molecules can switch between a state where they emit light (on) and a state where they do not (off). This can help to see individual molecules more clearly by separating their signals over time.
• Stokes shift: Difference in wavelength between the excitation and emission peaks (the larger the shift, the easier it is to separate excitation from emission). During emission, some of the energy is lost as heat, so the emitted light has a longer wavelength (lower energy).
• Dichroic mirrors: Dichroic mirrors ensure that only the fluorescence signal reaches the detector, preventing the
excitation light from interfering with the image. This results in higher contrast and clearer images.
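As a small worked example of the Stokes shift, the energy lost between excitation and emission follows from E ≈ hc/λ (the FITC-like wavelengths below are illustrative assumptions, not values from these notes):

```python
# Photon energy in eV from wavelength in nm: E ≈ 1239.84 / λ
def photon_energy_ev(wavelength_nm: float) -> float:
    return 1239.84 / wavelength_nm

# Illustrative, FITC-like excitation/emission peaks:
excitation_nm, emission_nm = 495.0, 519.0
stokes_shift_nm = emission_nm - excitation_nm          # 24 nm
energy_lost_ev = photon_energy_ev(excitation_nm) - photon_energy_ev(emission_nm)
# the emitted photon carries less energy; ~0.12 eV is dissipated as heat
```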
Images projected onto a detector are always limited by diffraction, but several techniques can get around this limit:
STED (Stimulated Emission Depletion): Normally, microscopes cannot resolve features < 200 nm because of the diffraction limit, but STED gets around this. A laser beam excites the fluorescent molecules in the sample, making them glow. A second, donut-shaped laser beam then depletes (turns off) the fluorescence around the excited molecules, leaving just a tiny central spot that still glows. This trick resolves much smaller details than usual (20–50 nm). It requires a complex setup with two precisely aligned lasers and special dyes that can switch between bright and dark states. The high intensity of the depletion laser can cause photobleaching, and photo-switchable fluorescent dyes are required.
RESOLFT (REversible Saturable Optical Linear Fluorescence Transitions): fluorescent molecules are switched between a bright state (where they emit light) and a dark state (where they do not). The switching is done with lasers, and the key is that the molecules can be switched back and forth many times without being damaged. The main advantage is that it causes less photobleaching than STED, because the switching between bright and dark states is gentler. However, it requires special fluorophores with reversible photo-switching.
PALM (PhotoActivated Localization Microscopy) and STORM (Stochastic Optical Reconstruction Microscopy) resolve small details by pinpointing the exact locations of individual fluorescent molecules. In both methods, the sample is labelled with special fluorescent molecules that can be switched on and off. Only a few of these molecules are turned on at a time, so they do not overlap. A picture of the glowing molecules is taken, they are switched off, and a few more are turned on. By repeating this process many times, a series of images is obtained with different molecules glowing in each one. A computer then combines all these images into a single super-resolution image (~20 nm), calculating the exact position of each molecule and revealing details much smaller than the diffraction limit. The disadvantages are that two lasers are needed (one for photoactivation and one for fluorescence readout) and that it is time-consuming (many images). The main difference between PALM and STORM is the type of fluorescent molecules they use: PALM uses photoactivatable proteins, while STORM uses dye pairs that can switch between states.
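The localisation step at the heart of PALM/STORM can be sketched with an intensity-weighted centroid; real software typically fits a Gaussian PSF instead, and the spot below is synthetic:

```python
import numpy as np

def localise_centroid(spot):
    """Intensity-weighted centroid of a single-molecule spot.
    A centroid already gives sub-pixel precision on a clean, isolated spot."""
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return float((ys * spot).sum() / total), float((xs * spot).sum() / total)

# Synthetic diffraction-limited spot centred at (5.3, 6.7) on a 12x12 pixel grid
ys, xs = np.indices((12, 12))
spot = np.exp(-((ys - 5.3) ** 2 + (xs - 6.7) ** 2) / (2 * 1.5 ** 2))
y0, x0 = localise_centroid(spot)
# y0 ≈ 5.3, x0 ≈ 6.7 — the position is recovered well below the pixel size
```

Repeating this for thousands of sparse frames and accumulating the coordinates is what builds the final super-resolution image.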
SIM (Structured Illumination Microscopy) works with standard widefield microscopes and doesn’t require special
fluorophores. It enhances resolution (100 nm) by using patterned light to illuminate the sample. Instead of shining a uniform
light on the sample, uses a series of striped or grid-like patterns. These patterns interact with the structures in the sample,
creating moiré fringes, which are patterns that contain high-resolution information. By capturing multiple images with
different patterns and angles, a computer can reconstruct a super-resolution image. The reconstruction process can
sometimes introduce artifacts. It works best with samples that aren’t too densely labeled, as overlapping signals can
complicate the reconstruction.
Photodamage/Phototoxicity: When fluorescent molecules absorb light, they can undergo chemical reactions that produce
reactive oxygen species (ROS) or free radicals. These ROS can damage the fluorescent molecules, causing them to lose
their ability to fluoresce (photobleaching, limits the time you can observe a sample) and can also harm the biological sample,
altering its structure and function.
• Use the lowest light intensity that still provides a good signal. Reduce the time the sample is exposed to light. Add antioxidants to the sample to neutralise ROS. Use more photostable fluorophores that are less prone to photobleaching.
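A minimal first-order bleaching model (an idealisation; the tau value is illustrative) shows why exposure time must be budgeted:

```python
import math

def surviving_fraction(t_s: float, tau_s: float) -> float:
    """First-order photobleaching model: the fraction of fluorophores still
    able to emit decays as exp(-t/tau). tau is the characteristic bleaching
    time, which grows when the light intensity is lowered."""
    return math.exp(-t_s / tau_s)

# With tau = 60 s, ~37% of fluorophores still emit after 60 s of illumination;
# after 3*tau only ~5% remain, which caps the usable observation time.
```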
05_IN VIVO IMAGING_BIOLUMINESCENCE_FLUORESCENCE
5.1. In vivo vs. ex vivo
In vivo imaging is done on living organisms, which means you can track changes over time in the same animal. This is
great because it reduces the number of animals needed for experiments and lets you see how diseases progress or how
treatments work in real-time. For example, you can watch tumour growth in mice using bioluminescence
Ex vivo imaging is done on tissues or organs that have been removed from the organism. This allows for very detailed and
high-resolution images, but you need to sacrifice animals at different stages to get these samples. It’s more controlled but
only gives you a snapshot in time, not the dynamic processes.
5.2. Background of Bio-optical Imaging
Bioluminescence: Bioluminescence is a natural phenomenon where living organisms produce light through a chemical
reaction. The chemical reaction involves the oxidation of luciferin, catalysed by luciferase (an enzyme that acts as a
catalyst), resulting in light emission.
Bioluminescence can be used to study infections related to biomaterials by tracking bacterial growth and spread, and in
cancer research, where bioluminescent markers are used to monitor tumour growth and the effectiveness of treatments in
live animals. For example, cancer cells expressing luciferase can be tracked in real-time to observe how they respond to
therapies.
• Fireflies, known for their glowing abdomens, use bioluminescence for communication and mating, while some
species of jellyfish produce light to startle predators or attract prey. Certain bacteria, like those found in the ocean,
emit light and can create stunning displays in the water.
• An interesting historical fact is the “Angel’s Glow” during the American Civil War, where soldiers with wounds
that glowed in the dark had a higher survival rate, later attributed to bioluminescent bacteria, Photorhabdus
luminescens, which produced antibiotics that helped prevent infections.
Fluorescence: Fluorescence characterization is about making invisible biological processes glow.
• Micrococcal Nuclease Activatable Probe. Initially, the probe is non-fluorescent, but it lights up when it meets the micrococcal nuclease enzyme. Once activated, the probe starts to glow, making it easy to see where the enzyme is active. It is handy for visualising and measuring micrococcal nuclease activity in real time (this enzyme is involved in DNA repair, cell death, and bacterial infections).
• Vancomycin Probe. Vancomycin is an antibiotic that binds to the cell walls of certain bacteria. A fluorescent dye is attached to the vancomycin molecule so that when the probe binds to bacteria, the dye lights up. It is used for identifying bacterial infections and for studying how bacteria interact with host tissues and how effective antibacterial treatments are.
Challenges: First off, photons get absorbed (reducing signal intensity) and scattered (blurring images) by tissues, which makes it tough to see what is happening deep inside. The signal detected at the surface depends on how deep the source is, with deeper sources giving weaker signals. Then there is the issue of autoluminescence and autofluorescence (the latter stronger): some tissues naturally emit light or fluoresce, which can interfere with readings. Quantifying the light from probes accurately is another hurdle due to variations in tissue properties and probe distribution.
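The depth dependence of the detected signal can be sketched with a Beer–Lambert-style attenuation model (a strong simplification of tissue optics; the mu_eff value below is illustrative, not a measured one):

```python
import math

def detected_signal(source_signal: float, depth_mm: float,
                    mu_eff_per_mm: float) -> float:
    """Exponential attenuation: light from deeper sources is exponentially
    weaker at the surface. mu_eff bundles absorption and scattering and is
    strongly tissue- and wavelength-dependent."""
    return source_signal * math.exp(-mu_eff_per_mm * depth_mm)

# With mu_eff ≈ 1 /mm (illustrative), a source 3 mm deep loses ~95% of its
# signal before reaching the surface: detected_signal(1.0, 3.0, 1.0) ≈ 0.05
```

This is also why NIR wavelengths, where tissue attenuation is lowest, are preferred for in vivo work.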
Lago and Lago X are systems designed for high sensitivity and flexibility in bioluminescence, fluorescence, and X-ray imaging. They can image up to 10 mice simultaneously with a 25 cm x 25 cm FOV for bioluminescence and fluorescence, and an additional 25 cm x 22 cm FOV for X-ray. They use high-intensity LED-based illumination, delivering 100 times more light on target compared to traditional sources, with minimal background noise. The solid-state air-cooled cameras reach -90°C, ensuring high sensitivity and stability without the need for liquids or chillers. They feature a heated imaging platform to maintain animal comfort and are compatible with anaesthesia systems.
5.3. Bio-imaging optics
Lens apertures control the amount of light entering the optical system. Larger apertures are better for low-light imaging. The aperture affects the 'circle of confusion', the area over which light from a point source spreads. Increasing the aperture decreases the 'depth of field', which can be useful for planar luminescence imaging. Signal decreases quadratically with the f-number.
Magnification is determined by the focal length and the distance from the object to the lens. In macroscopic imaging systems it is usually < 1.
m = S2/f − 1  or  m = √(A_CCD / A_FOV)
• f-number: determines the amount of light captured by the lens. A lower f-number means a larger aperture. N = f/D, where f is the focal length and D the diameter of the lens (aperture).
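These relations (m = S2/f − 1, m = √(A_CCD/A_FOV), N = f/D) can be checked numerically; the lens values below are illustrative:

```python
import math

def magnification_thin_lens(s2_mm: float, f_mm: float) -> float:
    """m = S2/f - 1, with S2 the lens-to-sensor distance and f the focal length."""
    return s2_mm / f_mm - 1.0

def magnification_from_areas(a_ccd_mm2: float, a_fov_mm2: float) -> float:
    """m = sqrt(A_CCD / A_FOV): sensor area over imaged field-of-view area."""
    return math.sqrt(a_ccd_mm2 / a_fov_mm2)

def f_number(f_mm: float, d_mm: float) -> float:
    """N = f/D; a lower N means a larger aperture and more light."""
    return f_mm / d_mm

# A 50 mm lens with the sensor at 55 mm gives m = 0.1 (macroscopic, m < 1);
# the same lens with a 25 mm aperture is an f/2 lens.
```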
• Quantifying fluorescence: light is emitted in all directions, but only a fraction is detected. This fraction is defined by the solid angle Ω = 2π(1 − cos θ) (unit: sr, steradian). The solid angle of a full sphere is 4π.
The total photon flux from an ROI in all directions (4π sr):
Φ_tot = (N_e,ROI · 4π) / (Ω_LCA · W · Q · t_exp) = N_e,ROI / (L · t_exp)
R_avg = Φ_tot / (4π · A_ROI)
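The solid-angle relation Ω = 2π(1 − cos θ) and the detected fraction of isotropic emission can be checked numerically (the 10-degree half-angle below is an illustrative assumption):

```python
import math

def solid_angle_sr(half_angle_deg: float) -> float:
    """Solid angle of a collection cone: Omega = 2*pi*(1 - cos(theta))."""
    theta = math.radians(half_angle_deg)
    return 2.0 * math.pi * (1.0 - math.cos(theta))

def collected_fraction(half_angle_deg: float) -> float:
    """Fraction of isotropically emitted photons entering the lens
    (full sphere = 4*pi steradian)."""
    return solid_angle_sr(half_angle_deg) / (4.0 * math.pi)

# A lens subtending a 10 degree half-angle collects under 1% of the emission:
# collected_fraction(10.0) ≈ 0.0076 — hence the 4π correction in the flux formula.
```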
The size of the ROI is often determined based on the pseudo-colour image of the luminescent spot. A rule of thumb is to
take 10% of the maximum signal as the minimal intensity or “cut-off” intensity.
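The 10% cut-off rule of thumb can be sketched on a toy image (values illustrative):

```python
import numpy as np

def roi_mask(image, cutoff: float = 0.10):
    """Rule-of-thumb ROI: every pixel at or above `cutoff` (default 10%)
    of the peak signal belongs to the luminescent spot."""
    return image >= cutoff * image.max()

img = np.array([[0.0, 0.2, 0.0],
                [0.3, 1.0, 0.4],
                [0.0, 0.5, 0.05]])
mask = roi_mask(img)
# mask selects the 5 pixels >= 0.1; the 0.05 and 0.0 pixels are excluded
```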
Line Profile: Detailed analysis of signal intensity along a line in the image.
Most tissue is autofluorescent up to the NIR (Near-Infrared) region, so it is advisable to avoid low wavelengths. Proper
nutrition (alfalfa-free rodent food) reduces autofluorescence. Subtracting a signal with a blue-shifted excitation wavelength
can also help.
If the fluorescence spectra are known, unmixing is straightforward. This involves separating the mixed signals into their
individual components based on their known spectral properties.
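With known spectra, unmixing reduces to a linear least-squares problem. A minimal sketch with two hypothetical dyes and four detection channels (all numbers illustrative):

```python
import numpy as np

# Known emission spectra of two dyes sampled in 4 detection channels
# (columns of S; values are made up for the example)
S = np.array([[0.80, 0.10],
              [0.60, 0.30],
              [0.20, 0.70],
              [0.05, 0.90]])

true_abundance = np.array([2.0, 1.0])   # how much of each dye is present
measured = S @ true_abundance           # mixed signal across the 4 channels

# Least-squares unmixing recovers the per-dye contributions:
abundance, *_ = np.linalg.lstsq(S, measured, rcond=None)
# abundance ≈ [2.0, 1.0]
```

Real data adds noise and autofluorescence, which is usually handled by adding an extra "background spectrum" column to S.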
Sensitivity is the system’s ability to detect low light levels, for capturing faint signals from bioluminescent or fluorescent
sources.
• Quantum efficiency, which measures how effectively the CCD converts photons to electrons.
• Longer exposure times can enhance sensitivity by collecting more photons, though this must be balanced against
risks like photobleaching and motion artifacts.
• Readout noise, generated during the CCD’s readout process, can obscure weak signals, so lower readout noise is
better.
• Dark current, the thermal noise from the CCD even without light, can be minimized by cooling the CCD. Photon
shot noise, the statistical variation in detected photons, also affects sensitivity, with higher signal levels improving
it.
Resolution
• NA of the lens determines its light-gathering ability. Smaller pixel sizes on the CCD chip capture finer details.
• Higher magnification spreads the image over more pixels, increasing resolution but reducing the FOV.
• Optical aberrations, or imperfections in the lens, can distort images, so high-quality optics are used.
• Binning combines adjacent pixels to boost sensitivity but can reduce resolution.
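The sensitivity/resolution trade-off of binning can be sketched in a few lines (2x2 summing; pixel values illustrative):

```python
import numpy as np

def bin2x2(image):
    """2x2 binning: sum each 2x2 block into one pixel.
    4x the signal per pixel, but half the resolution in each axis."""
    h, w = image.shape
    return (image[:h - h % 2, :w - w % 2]
            .reshape(h // 2, 2, w // 2, 2)
            .sum(axis=(1, 3)))

img = np.arange(16, dtype=float).reshape(4, 4)
binned = bin2x2(img)
# binned.shape == (2, 2); binned[0, 0] == 0 + 1 + 4 + 5 == 10
```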
5.5. Fluorescence Molecular Tomography (FMT)
FMT is a cutting-edge technique that creates 3D maps of fluorescent molecules within biological tissues. Three steps lead to quantitative images:
Acquisition. A NIR laser scans the tissue from multiple angles, exciting fluorochromes, while both the baseline absorption and the corresponding fluorescence profiles are measured. The result is a set of paired data maps from thousands of projections.
Next, normalisation. The paired data are processed into normalised fluorescence measurements using the Born ratio (the ratio between the measured fluorescence field and the excitation field at each detector point, for each source position). The data are then fed into algorithmic models that account for tissue characteristics and surface boundaries.
Finally, reconstruction. A detailed 3D fluorescence map is generated for each point in the subject, calculating the fluorescence throughout the volume of interest. The concentration of the probe needs to be known beforehand to determine the fluorophore distribution.
Used for:
• Cellular processes in vivo, tracking disease progression, monitoring treatment efficacy, and understanding
biological pathways.
• Drug development, visualisation of drug distribution and targeting within the body.
• In cancer research, to see tumour growth and metastasis, and evaluate responses to treatments.
• When combined with other imaging modalities like CT or MRI, provides both molecular and anatomical
information.
5.6. Intra-vital imaging
A form of microscopy for observing biological processes in vivo at high resolution. It allows individual cells to be seen within tissues. The main challenges with this method are light scattering and absorption in the tissue, as well as autofluorescence, which can degrade the clarity of the images.
To obtain transparency, one common method uses tartrazine: a yellow-to-orange, water-soluble pigment that raises the refractive index of tissues. This change in refractive index reduces light scattering, making the tissues more transparent and the images clearer.
Intravital imaging of engineered bone tissue is used to observe the dynamic processes of bone regeneration. It uses multiphoton intravital microscopy (MP-IVM) for real-time visualisation of cellular activities within bone tissue at high resolution. To achieve this, an ectopic imaging window is placed over a tissue-engineered construct implanted in mice, giving continuous optical access without the need for repeated surgeries. With this setup, individual cells can be tracked over several days, giving insights into processes like angiogenesis (formation of new blood vessels) and osteogenesis (bone formation).
Zebrafish imaging leverages the natural transparency of zebrafish, especially during their early stages, to observe internal
biological processes in real-time. Without invasive procedures, using fluorescent markers to highlight specific cells, it is
possible to track disease progression, and tissue regeneration.
5.7. Intra-operative imaging
It is about using imaging techniques right in the middle of an operation to help surgeons see what is going on inside the body. This helps surgeons make sure they are removing all of a tumour. With fluorescence, it is easier to spot cancerous tissues.
In a groundbreaking first-in-human study, researchers used intraoperative tumour-specific fluorescence imaging to target folate receptor-α (FR-α) in ovarian cancer, using a fluorescent agent that binds to FR-α, which is overexpressed in most epithelial ovarian cancers. During surgery, this agent lights up cancerous tissues.
Ultrasound is used for guidance in soft tissues. MRI and CT are less common intraoperatively (bulky equipment) but give detailed images. Photoacoustic imaging combines laser excitation with ultrasound detection. Raman spectroscopy uses light scattering to give molecular details about tissues.
06_SAMPLE PREPARATION
6.1. How to prepare for microscopy
Getting ready for microscopy means making sure the samples are in tip-top shape and knowing exactly what you are looking for. First off, know your goals: what are you trying to measure? Do you need 2D, 3D, or even 4D information? Pick the right type of microscopy and prepare the samples accordingly. Next, grab the samples. Whether you are working with tissues, cells, or materials, you need to process them to keep their structure and reactivity intact. For tissues, this might mean fixing and embedding them; for materials, standardise their concentration and aggregation state. Then comes setup optimisation: choose the microscopy technique based on what needs to be seen, thinking about resolution, sensitivity, size, and thickness. Always have controls, processed the same way as the samples, to spot any artifacts or misinterpretations. The sample also needs to be stable in the imaging medium.
• Too high a concentration: when too many particles are packed closely together, it becomes hard to tell them apart, and a tracking algorithm might jump back and forth between them.
• Too fast particle movement: the imaging system cannot keep up.
• Not enough brightness: particles will not show up in the images.
• Bleaching: particles lose their brightness over time due to prolonged exposure to light.
• Salt crystals can appear in the image and be mistaken for nanoparticles.
• Coffee-ring effect: particles spread on a surface form ring-like patterns due to solvent evaporation (fast evaporation is preferable).
Use clean solvents to avoid introducing contaminants that can create artifacts; proper washing techniques can help remove artifacts, but some or all of the sample might be removed too. Always include controls to differentiate between real signals and artifacts, for example by comparing samples with and without the nanoparticles, or with and without specific treatments.
Fixing cells can preserve their structure but usually kills them.
• Cross-linking: chemicals like formaldehyde create covalent bonds between proteins, 'freezing' the cell structures in place.
• Precipitation fixation: alcohols (methanol or ethanol) precipitate proteins, which disrupts the native structure but makes the proteins insoluble and fixed in place.
Staining and labelling to increase contrast. Depending on what you are looking at:
• Organic dyes: simple and biocompatible but can bleach, not very bright.
• Quantum dots: semiconductor particles, brighter and more photostable than organic dyes, though less biocompatible. Large Stokes shift that reduces background noise (the difference between the spectral positions of the absorption and emission maxima). Smaller dots emit blue light and larger dots emit red light due to quantum confinement, which arises when the particle is smaller than the exciton Bohr radius (size-dependent energy levels).
• Fluorescent proteins: GFP (Green Fluorescent Protein), can be genetically encoded and expressed.
• Nanodiamonds: Very stable and biocompatible, but quite large and not as bright as other options.
Antibodies for labelling. Specific, as they bind to unique sites (epitopes) on target proteins. They have a variable region (binds the specific antigen, the target protein) and a constant region (recognised by secondary antibodies or other detection methods).
• Primary antibodies bind directly to the target antigen and are chosen based on their specificity to the protein of
interest.
• Secondary antibodies bind to the constant region of the primary antibody, and they are often conjugated with a
fluorescent dye, enzyme, or other markers to make target visible under a microscope.
When a primary antibody is directly conjugated with a dye or marker, this is direct labelling. When a primary antibody binds to the target and a secondary antibody carrying a fluorescent label then binds to the primary antibody, this is indirect labelling. This way, the signal is amplified and sensitivity is increased.
Biotin/Streptavidin: The binding between biotin (a vitamin also known as B7) and streptavidin (a protein from the
bacterium Streptomyces avidinii) is one of the strongest non-covalent interactions known. Biotin can be attached to
antibodies, enzymes, or nucleic acids, and then these biotinylated molecules can be captured or detected using streptavidin.
Fluorescent proteins naturally glow when exposed to certain wavelengths of light. The best known is GFP, originally discovered in the jellyfish Aequorea victoria. Since then, scientists have developed a whole palette of fluorescent proteins in various colours by modifying GFP. The spectra of fluorescent dyes are not sharp, distinct lines but broad peaks with tails that extend into other wavelengths (spectral overlap). The tail (lower-intensity light emitted at wavelengths just outside the main emission peak) can overlap with the emission spectrum of another dye and be detected as a signal from that dye. Filters can discriminate between the emission spectra of dyes.
• EM samples must be somewhat conductive to prevent build-up of electrons, which can cause artifacts. Non-conductive samples are often coated with a thin layer (gold/palladium alloy): sputter coating.
• EM operates in vacuum, which is problematic for liquid samples as they can evaporate or change state. Cryo-EM is used to handle such samples.
• EM is somewhat destructive and can damage samples (the higher the magnification, the bigger this effect).
• Magnetic particles interfere with the beam and distort the image.
• Heavy elements are most visible, as they scatter electrons more effectively. This is why staining with heavy metals is often used to enhance contrast.
• Samples often need to be very thin (less than 100 nm for TEM) to allow electrons to pass through. For SEM, samples are coated with a conductive material to improve image quality.
Staining and labelling for different samples
• Negative staining: an electron-dense stain is applied to the sample. The stain surrounds the specimen but does not penetrate it, creating a dark background while the specimen appears lighter. It is easy to perform, offers high resolution, and is used for imaging viruses, bacteria, and protein complexes.
• Positive staining uses stains that bind directly to the specimen, improving the contrast of specific structures within the sample. Used for cells and tissues, highlighting components like membranes and organelles. It reveals details of internal structure.
• Immunogold labelling uses antibodies conjugated to gold particles to label specific proteins or antigens within the sample. The gold particles are electron-dense and visible under TEM. Highly specific; used to locate proteins.
• Heavy metal staining. These metals bind to various cellular components. High contrast.
Preparation for cryo-EM: to generate high-resolution protein structures. Typically, a very small volume (about 2-5 microliters) of the sample is spread into a thin layer on a holey carbon grid. This grid is then blotted to remove excess liquid, leaving a thin film of the sample. Then the sample is rapidly frozen by plunging the grid into liquid ethane
cooled (below –140°C) by liquid nitrogen (vitrification). The rapid freezing prevents the formation of ice crystals, which
can damage the sample and obscure fine details. Instead, the water in the sample forms a glass-like, amorphous ice that
preserves the native structure of the biological material. After vitrification, the grids are stored in liquid nitrogen to maintain
their frozen state. During imaging, the grids are transferred to the electron microscope while still frozen, using specialized
holders that keep them at cryogenic temperatures.
Preparation for Focused Ion Beam/Scanning Electron Microscopy (FIB/SEM): for obtaining high-resolution images and analysing the topography and composition of samples at the nanometric scale.
07_ELECTRON MICROSCOPY
Signals from beam-sample interaction in EM
• SE: produced by inelastic scattering, have low energy, and come from the sample’s surface, good for high-res
images of surface topography.
• BSE: result from elastic scattering, have higher energy, and bounce back from the sample. They give compositional
contrast as heavier elements scatter electrons more effectively.
• XRs: generated when inelastic scattering ejects inner-shell electrons and outer-shell electrons fill the gap, emitting
characteristic XRs that are used for elemental analysis via EDX.
• Auger electrons: emitted when the energy from an electron transition ejects another electron (Auger effect); they are used for surface analysis.
Large-scale EM. Used for capturing large areas of samples at high resolution, e.g. with FAST-EM (multiple beams to rapidly acquire large datasets).
Nanotomy: Anatomical study of tissues and cells using EM. Provides an overview plus high-resolution images of samples, for unbiased examination. Promotes the sharing of raw data through platforms (nanotomy.org). It is capable of imaging rare events that might be missed with smaller-scale techniques.
Correlated Microscopy (CLEM)
Correlated Light and Electron Microscopy (CLEM) combines the strengths of fluorescence microscopy and electron microscopy. FM is used to quickly identify large areas, while EM zooms in for high-resolution images of specific spots. With this combination, cellular structures can be studied at a macromolecular level.
• Decreases phototoxicity
• Ultra-fast acquisition times
• Optimal excitation and detection
• Potentially ideal for SuperNova ablation.
08_IMAGE PROCESSING
Micrograph/Image: Consistent visual representation of information about an object
Digital image acquisition: converting what we see in the real world into a digital format that a computer can understand. It turns continuous visual information into a set of discrete digital values. The digital image can then be processed by software.
Sampling: digitisation of the spatial coordinates, like taking a grid of points from an image. Each point where the grid lines intersect is sampled, capturing colour and intensity. The finer the grid (higher sampling rate), the more detail. A microscope has a lateral spatial resolution (dr) and a magnification.
• Nyquist theorem: for perfect reconstruction, the sampling frequency has to be at least twice the maximum frequency in the original signal.
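The Nyquist criterion translates into a maximum effective pixel size in the sample plane. A minimal sketch (the camera pixel size, magnifications, and dr values below are illustrative):

```python
def max_pixel_size_nm(lateral_resolution_nm: float) -> float:
    """Nyquist: sample at least twice per resolvable distance dr, so the
    effective pixel size in the sample plane must be <= dr / 2."""
    return lateral_resolution_nm / 2.0

def camera_pixel_ok(camera_pixel_um: float, magnification: float,
                    lateral_resolution_nm: float) -> bool:
    """Effective pixel size = physical camera pixel / magnification."""
    effective_nm = camera_pixel_um * 1000.0 / magnification
    return effective_nm <= max_pixel_size_nm(lateral_resolution_nm)

# dr = 250 nm with a 6.5 um camera pixel: a 60x objective samples at
# ~108 nm/px (fine), while 20x gives 325 nm/px and undersamples.
```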
Quantisation: digitisation of the intensity amplitudes, assigning a specific intensity value at each point. It converts the continuous range of pixel intensities into a limited set of discrete levels (e.g. 8-bit: 256 levels).
Colour: RGB and CMYK. Each pixel has colour values that combine to create the final colour.
Image visualisation. LUTs map pixel values to specific colours. Histograms for HiLo display: underexposed (low intensity, blue) / overexposed (high intensity, red).
• How to avoid under and over exposure? Match minimum and maximum light intensity inside the detector's dynamic
range.
Image files formats. Compression reduces the file size by manipulating the bitmap, resulting in a different binary file.
• Lossy compression reduces file size by permanently eliminating some data, can affect image quality (JPEG).
Lossless compression reduces file size without losing any data, preserving the original image quality (PNG).
Image processing example: Segmentation. Breaking down an image into mutually exclusive regions whose pixels satisfy a defined property.
1. Split channels based on RGB. Isolating specific features of each colour.
2. Subtraction of background intensity from each channel, to highlight the main features and remove noise
3. Contrast enhancement. Gamma transformation adjusts the brightness levels (𝛾 <1: increase, 𝛾>1: decrease)
4. Gaussian blur applied to smooth and reduce noise. Gaussian kernel by convolution.
5. Binarisation: each pixel becomes either black or white, by setting a threshold value.
6. Morphological transformation to refine the binary image: erosion removes white pixels from the edges of objects (if more than n adjacent pixels are black) and dilation adds pixels to edges (if more than n adjacent pixels are white).
7. Boolean operations (XOR) to combine regions, adding another layer of refinement.
8. False colouring to make different regions stand out, and composite images created by combining multiple segmented images.
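Steps 2, 4, and 5 of the pipeline can be sketched for a single channel (numpy only; the 3x3 Gaussian-like kernel, background level, and threshold are illustrative choices):

```python
import numpy as np

def segment(image, background, threshold):
    """Background subtraction, smoothing by kernel convolution, binarisation."""
    img = np.clip(image - background, 0, None)            # background subtraction
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], float) / 16.0          # Gaussian-like 3x3 kernel
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    smooth = sum(kernel[i, j] * padded[i:i + h, j:j + w]  # convolution as a
                 for i in range(3) for j in range(3))     # weighted sum of shifts
    return smooth > threshold                             # binarisation

rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (32, 32))   # noisy background
img[10:20, 10:20] += 1.0                # one bright 10x10 object
mask = segment(img, background=0.1, threshold=0.5)
# mask is True (white) roughly over the 10x10 object only
```

Erosion, dilation, and Boolean operations would then operate on `mask` to refine it.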
Linear filters. Convolution of kernels (smooth, sharp, edge detection)
Image restoration. Deconvolution with the inverse of the degradation function: reversing the convolution that caused the blur.
Distance measurements (in px).
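The deconvolution idea can be sketched in Fourier space. A Wiener-style regularised inverse is assumed here, since the notes do not name a specific algorithm, and the point-source example is synthetic:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, eps=1e-3):
    """Invert the degradation function in Fourier space. A plain inverse (1/H)
    explodes where H is near zero, so a small regularisation term eps is
    added (Wiener-style). Assumes the same circular convolution model used
    to build the blurred image below."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))

# Blur a point source with a 3x3 box kernel, then restore it:
img = np.zeros((16, 16))
img[8, 8] = 1.0
kernel = np.full((3, 3), 1.0 / 9.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
restored = wiener_deconvolve(blurred, kernel)
# the restored image peaks back at the original point (8, 8)
```

Larger eps suppresses noise amplification at the cost of residual blur; with eps → 0 this reduces to the plain inverse filter.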