Monday, November 28, 2016

Week 5 - Fluorescence microscopy

In week 2 we built a simple light microscope and imaged neurons (fig 1). A glance at the resulting image shows three key limitations of this approach: we lack contrast between our neuron of interest and the background; we observe a static readout of a dynamic structure; and we view the brain in slices, rather than in its complete form. A combination of fluorescence and genetic engineering has produced techniques that circumvent these limitations of microscopy.


Fluorescent molecules absorb light of a given wavelength, which excites electrons into a high energy state. When these electrons relax, they emit light at a different, but specified, wavelength. Fluorescence microscopy utilizes these molecules by shining light at their absorption wavelength, and detecting light at their emission wavelength. If these molecules can be confined only to cells of interest, this produces a nearly infinite contrast between the sample and the background. Confining fluorescent molecules to cells of interest can be done in many different ways, but in many cases it relies upon advances in genetic engineering.

Famously, green fluorescent protein (GFP) was first discovered in glowing jellyfish at Friday Harbor. When the molecule absorbs (blue) light with a wavelength of around 488 nm, it emits (green) light with wavelengths around 509 nm (see figure 1). Its potential for applications in biology became apparent when Douglas Prasher and Martin Chalfie managed to clone its nucleotide sequence and to express it in E. coli and C. elegans in 1994. In neuroscience, GFP fused with calmodulin (CaM) has proven to be an excellent marker of neural activity: when calcium binds to this complex, it undergoes a conformational change that enables absorption of light at the excitation wavelength. This is extremely useful, as it provides us with a method for monitoring neural activity by observing calcium transients through a microscope.

Figure 1. Emission and excitation spectra of our set-up. Because the excitation and emission spectra of GFP overlap (blue and green lines), we used filters whose transmission bandwidths are shown in transparent blue and green. Lastly, the dichroic mirror (black line) transmitted the emitted (green) light from the sample, while reflecting the blue excitation light from our LED. Filter transmission data were retrieved from and GFP excitation/emission spectral data from

Designing the microscope

In order to exploit these properties of GFP for microscopy, the design of our microscope had to satisfy a few key requirements, which we summarise in figure 2.

Firstly, to avoid contamination, we needed to ensure that there was no overlap between the light used for excitation and the light detected by our camera. As the excitation/emission spectra of GFP show some degree of overlap (fig 1), we placed a band-pass excitation filter in front of our LED light source to ensure only blue light (wavelength 469 ± 17.5 nm) would be sent into the sample. A green emission filter (525 ± 19.5 nm) was placed in front of our camera to ensure that all detected light came from fluorescence emitted by the sample, rather than from our LED. The emission filter necessarily discards some of the fluorescent light coming from the sample. However, since virtually no detected light comes from sources other than the sample, a very good signal-to-noise ratio can be maintained.
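As a sanity check on this design choice, one can verify numerically that the two filter passbands quoted above do not share any wavelengths. A minimal sketch in Python (the helper names are ours, for illustration, not from any code we actually used):

```python
# Check that our excitation and emission filter passbands do not overlap,
# using the centre wavelength +/- half-bandwidth values quoted above.

def passband(centre_nm, half_width_nm):
    """Return the (low, high) edges of a band-pass filter in nm."""
    return (centre_nm - half_width_nm, centre_nm + half_width_nm)

def bands_overlap(band_a, band_b):
    """True if two (low, high) wavelength intervals share any wavelength."""
    return band_a[0] <= band_b[1] and band_b[0] <= band_a[1]

excitation = passband(469, 17.5)  # blue LED filter: 451.5-486.5 nm
emission = passband(525, 19.5)    # green camera filter: 505.5-544.5 nm

print(bands_overlap(excitation, emission))  # False: no shared wavelengths
```

The ~19 nm gap between the filter bands (486.5 nm to 505.5 nm) is exactly where the dichroic mirror's 500 nm transition sits.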

A second goal was to separate the paths of the excitation and emission light. To achieve this, we placed a dichroic mirror between the tube lens and the objective. This dichroic mirror transmits light of wavelengths longer than 500 nm and reflects most of the light below that wavelength (figure 1), preventing excitation light from reaching the camera.

A third, not entirely insignificant change to our microscope from week 2 entailed replacing the dismantled consumer webcam with a high-performance, highly efficient scientific CMOS camera. This camera has a quantum efficiency of 82%, meaning that 82% of the photons hitting its light-sensitive surface produce an electrical response.
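To make the quantum efficiency figure concrete, here is a small illustrative calculation; only the 82% figure comes from the camera's specification, and the photon count is a made-up example value:

```python
import math

QUANTUM_EFFICIENCY = 0.82  # fraction of incident photons converted to charge

def expected_photoelectrons(n_photons, qe=QUANTUM_EFFICIENCY):
    """Mean number of photoelectrons generated from n_photons hits."""
    return n_photons * qe

def photon_limited_snr(n_photons, qe=QUANTUM_EFFICIENCY):
    """Shot-noise-limited SNR: detected electrons / sqrt(detected electrons)."""
    detected = n_photons * qe
    return math.sqrt(detected)

# e.g. 1000 photons reaching a pixel during one exposure (hypothetical)
print(expected_photoelectrons(1000))          # 820.0
print(round(photon_limited_snr(1000), 1))     # 28.6
```

Since shot-noise-limited SNR scales with the square root of detected photons, a high quantum efficiency directly buys cleaner images at a given illumination level.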

Figure 2. Schematic representation of our fluorescence microscope 

Figure 3. The fluorescence microscopy set-up. 


At the end of the week, we tested our fluorescence microscope by imaging zebrafish larvae provided by Elena Dreosti. These zebrafish expressed GCaMP, so their neurons showed green fluorescence whenever the calcium concentration in them increased. The fish were fixed in a gel, such that they could not move, and we imaged their neural activity in vivo using a 16x water-immersion objective.

The results of this imaging can be seen in this video (turn on HD for better resolution):

Neural activity in zebrafish tectum from Jesse Geerts on Vimeo.


By using fluorescent proteins together with a combination of wavelength filters and a dichroic mirror, we were able to build a fluorescence microscope that overcomes many of the limitations we saw in week 2: capturing only green light ensured that all light hitting our camera came from the neurons, solving most of our contrast problem. Moreover, this meant that we were able to look at a zebrafish brain in vivo and in real time, a major improvement on the static readout from a brain slice.

However, our microscope has some disadvantages that become apparent in the video: GFP is also expressed in neurons outside the focal plane, and since the excitation light hits these neurons too, they fluoresce as well. Our wide field of view captures this light, meaning that part of our signal is polluted by light from outside the focal plane. As we will see in the next blog, this problem can be circumvented only by looking at a single point in the sample at a time.

Monday, November 14, 2016

Weeks 3-4: Electrophysiology

The CMOS revolution

Complementary metal-oxide-semiconductor (CMOS) circuits form the basis of modern electronics because they make it possible to perform logical operations without drawing any steady-state current. Such integrated circuitry has facilitated the development of extremely small and efficient technology, including microprocessors, static RAM and microcontrollers. For example, the technology that produced smartphones would arguably not have been possible without CMOS integrated circuitry. This technology is likewise transforming neuroscience, particularly in terms of equipment for electrophysiological data acquisition, which we will focus on in this week's blog.

CMOS probes

CMOS technology has enabled the fabrication of high-density multichannel silicon-based probes, because large numbers of recording sites can be sampled in series with only a few wires. These probes are perhaps the best available tools for obtaining the action potential firing patterns of large groups of neurons in live animals. They consist of a ~10 mm shank with up to thousands of recording sites distributed along it. Each recording site detects local voltage fluctuations relative to preselected reference sites. The large number of recording sites and the density of their arrangement along the probe make it possible to acquire local voltage changes in the brain with excellent temporal precision and spatial resolution. This has enabled accurate spike sorting of a much larger number of neurons than was previously possible.

Experimenting with the NeuroSeeker probe

Our assignment this week was to plan and help execute an experiment using one such multichannel probe, the 'NeuroSeeker' probe. Joana Neto is currently characterising the probe as part of her PhD, and she kindly allowed us to join her experiment this week.
Experienced members of Adam Kampff's group (Joana Neto, Joana Nogueira, George Dimitriadis and Lorenza Calcaterra) carried out the surgery and handling of the probe. The NeuroSeeker probe has 1440 sites in total: 1336 recording sites and 96 ground reference sites. These sites are 20 µm x 20 µm and are arranged in 4 columns, evenly distributed (2.5 µm spacing) along the shank of the probe.

Probing the auditory system - the plan

In addition to learning how high-density probes work, we had two experimental objectives:
  • to acquire and analyse data obtained with a probe
  • to address a scientific question of our choice within the auditory cortex
We wanted to learn something about auditory processing across cortical columns in the auditory cortex, and planned to insert our probe into layer 5 of the auditory cortex, parallel with the layer, such that it would span a reasonable number of neurons with similar preferred frequencies (Figure 2).

Figure 2) Schematic depicting our planned experiment, with a NeuroSeeker probe inserted roughly parallel with the deep layers of the auditory cortex.

We designed auditory tests to investigate the questions we wanted to address:
  • Is neuronal stimulus-specific adaptation (SSA) uniform across frequency-matched neurons in layer 5 of the auditory cortex?
  • Is bandwidth encoded along the dorsal-ventral axis of the auditory cortex?
  • Do all neurons respond the same way to harmonics?
Probing the auditory system - the experiment

We inserted the probe gradually into the region of interest and found a region with a lot of spontaneous activity. Using a Bonsai plugin, we were able to visualise this activity across all the channels simultaneously, in real time, during the experiment: see the video below! <video>

Unfortunately, for a variety of reasons, we did not find a population of neurons suitable for our auditory tasks, but we collected plenty of spontaneous recordings that we could start to analyse. Hopefully we will get another chance to attempt this in the next couple of weeks (watch this space for updates).

Analysis

Adam encouraged us to start by investigating the data from first principles (on a very limited timescale) to get a feel for the data and to better understand the challenges that automated spike sorting entails. We focused on the 480 channels that seemed to have the most activity and applied a simple threshold criterion to identify regions containing putative spikes (figure 3A). The minimum value within each supra-threshold region was labelled as the 'event location', and this subset of the data was visualised as a raster plot (figure 3B). We identified most of the events that were visible by eye, but such a crude technique obviously produces a large number of false negatives and false positives. We nonetheless took the sum of all detected events in each channel as a first approximation. Surprisingly, even with this very crude method it is possible to see some structure in the data arising from different neurons. Later, Jorge Aurelius from the Gatsby Unit might try to use these data to find clusters using expectation-maximisation, an algorithm that we discussed in the machine learning class.

Figure 3) A) 10 s of raw traces, with events crossing the threshold (dotted line) marked with a black circle.
B) Each event in the same time window is shown as a raster plot, and the total number of spikes detected in each channel over the entire recording is plotted as a heat map in C.

Spike sorting

Modern analytical methods are far more sophisticated than our approach. Spikes detected by multi-channel probes can be allocated to their neuron of origin on the basis of waveform shape (mostly amplitude), which varies as a function of distance and orientation relative to the probe (figure 4). The action potential outputs of hundreds of cells can therefore be simultaneously sampled and allocated to their neurons of origin. Analytical tools such as KiloSort <link> now support the near real-time sorting of action potentials from hundreds of cells at once.

Figure 4) Schematic showing how single action potentials from two cells can be allocated to their cell of origin. Left: what a region of electrodes might detect in response to a single cell positioned near the probe. Right: how two cells at slightly different positions might appear. The spatiotemporal pattern of events on different channels can then be used to group similar events that are likely to arise from the same cell.

Joana sorted the spikes with KiloSort and found 62 well-isolated neurons contributing to the data. An example of one of these identified clusters can be seen in the figure below.

Summary

Overall, for this part of the course we managed to acquire some exciting data: the first obtained by the lab with this configuration of the NeuroSeeker probe. We didn't manage to test our auditory experiments yet, but we nonetheless have plenty of spontaneous data to tackle, which has allowed us to explore both rudimentary and state-of-the-art analysis methods.
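For readers curious about the threshold criterion described in the Analysis section, here is a minimal sketch of that style of event detection on a toy trace. This is an illustrative reimplementation under our own assumptions, not the code we actually ran, and the threshold and trace values are invented:

```python
import numpy as np

def detect_events(trace, threshold):
    """Naive spike detection: find contiguous runs of samples below a
    (negative) threshold and label the minimum of each run as the event
    location. Returns an array of sample indices."""
    below = trace < threshold
    # edges where the trace crosses into / out of the supra-threshold region
    edges = np.diff(below.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if below[0]:
        starts = np.insert(starts, 0, 0)
    if below[-1]:
        ends = np.append(ends, len(trace))
    # event location = index of the minimum value within each run
    return np.array([s + np.argmin(trace[s:e]) for s, e in zip(starts, ends)])

# toy trace: flat baseline with two negative-going "spikes"
trace = np.zeros(100)
trace[20:23] = [-40, -80, -30]   # spike 1, trough at sample 21
trace[60:62] = [-50, -90]        # spike 2, trough at sample 61
print(detect_events(trace, threshold=-25))  # [21 61]
```

Summing the detected events per channel, as we did, then amounts to `len(detect_events(channel, threshold))` for each channel; the false positives and negatives mentioned above come from noise crossing the fixed threshold and small spikes staying above it.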

Week 2: Optics and microscopy

The use of light in neuroscience extends back to Camillo Golgi and Ramón y Cajal's disputes on the structure of the nervous system: Golgi believed it comprised a large continuous fibrous net, while Cajal argued it was formed of many non-continuous individual cells. Cajal won this argument, and was likely on the right side of history partly due to having a superior microscope. The importance of quality microscopes has only increased since, and microscopy has undergone a new revolution in the past 20 or so years. In the spirit of Golgi and Cajal, this week in the Experimental Neuroscience course at the SWC we aimed to build a microscope capable of visualising neurons.

Microscopy 101

Light can be considered both particulate, in the form of photons, and wavelike; we focused on the macroscopic wavelike properties of light, given the scale on which we were working. On this scale, light can be described as an electromagnetic wave with a speed that depends only on the medium through which it is travelling, and with a variable wavelength and frequency (which are interrelated). Visible light, which we used to build our microscope, has a wavelength in the range of 400-700 nm. Microscopy manipulates these features of light to enable visualisation of microscopic structures, such as neurons.

Designing a microscope

Conventional light microscopy makes use of the fact that the speed of light depends on the medium through which it is travelling. Thus, when light passes from one medium to another, its speed changes. As light can be considered a wave, if it meets the change in medium at an angle, one 'side' of the wave changes speed before the other, causing the wave to change direction. By analogy, if the right-hand wheel of a car hits a puddle in the road, the right-hand side of the car slows down, causing the car to turn right.
The degree of turning, or 'refraction', depends on the relative speed of light in the two media, characterised by their indices of refraction. Lenses exploit this ability to manipulate the direction of light: by adjusting the angle of incidence and the refractive index of the material used, the direction of light can be controlled. This is achieved by adding a curvature to the surface of the glass. A key feature of a lens is its focal length, which determines the distance at which light entering the lens converges to a single point, and hence the size of an image at a given distance from the lens. As lenses use the curvature of their surface to bend light, the focal length is largely determined by the curvature of the lens surface. This plays a crucial role in deciding where to place lenses relative to each other and to the sample being imaged.

The first step of the microscope is to shine light through the sample. For this purpose, we used an LED light source. However, collimated light prevents an image from forming, so to ensure our light was not collimated, we placed a diffuser before the sample, meaning light hit our sample at multiple angles. To reduce contamination from stray light hitting parts of the sample other than our region of interest, we introduced an aperture after the diffuser but before the sample. Thus our sample is illuminated by non-collimated light restricted to a small region of interest. Having passed through the sample, the light then enters an objective lens placed one focal length away from the sample, with the aim of collecting as much of the light from the sample as possible and collimating it. This collimated light then passes into our second lens, which bends it such that it converges onto our image collection point.
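In this arrangement (an objective collimating light from the sample, a second lens refocusing it onto the detector), the magnification is simply the ratio of the two focal lengths. A quick sketch with hypothetical focal lengths, not our actual components:

```python
# Magnification of an objective + second (tube) lens pair in the
# collimated-space arrangement described above. Focal lengths below are
# hypothetical example values.

def magnification(f_tube_mm, f_objective_mm):
    """Image magnification = tube lens focal length / objective focal length."""
    return f_tube_mm / f_objective_mm

def image_size_um(object_size_um, f_tube_mm, f_objective_mm):
    """Size of the image formed on the camera chip for a given object."""
    return object_size_um * magnification(f_tube_mm, f_objective_mm)

print(magnification(200, 50))        # 4.0
print(image_size_um(20, 200, 50))    # 80.0 (a 20 um soma imaged at 80 um)
```

This is why the choice of the second lens's focal length matters as much as the objective's: swapping it changes the magnification without touching the sample side of the system.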
To collect our image, we used a webcam with the lens removed, placed after the second lens at its focal point, providing a photosensitive chip that could easily be connected to a computer.

Figure 1 - Schematic diagram of microscope

Figure 2 - Opening the components of the microscope. Left: Before. Right: After.

Figure 3 - Microscope setup

Results

Having set up our system, we attempted to produce images of different samples at various magnifications. During testing and alignment, we used a slice of rat brain from the Kampff lab. Our first successful image can be seen in figure 4, where it is projected onto the forehead of SWC PhD student Jesse Geerts.

Figure 4 - Image of a brain slice projected onto Jesse's head

We next increased the magnification and replaced Jesse's head with our modified webcam. As biological tissue can be hard to image, we tested our system by imaging a cloth before moving on to biological tissue (figure 5). We successfully imaged populations of neurons, and were able to identify a single neuron filled with biocytin (figure 6). We therefore succeeded in our aim of building a microscope capable of visualising neurons.

Figure 5 - Image of a cloth (left) and Nissl-stained brain tissue (right) at high magnification.

Figure 6 - Image of a biocytin-filled neuron (denoted by arrow)

Conclusions

By using a very simple configuration of lenses, in a relatively short period of time we made a microscope capable of visualising neurons and producing a digital image. With some minor adjustments we could adapt this microscope to use fluorescence, or, with a different configuration of lenses, produce a higher magnification.

Week 1: Electrical Circuits

This is the first in a series of blog posts about the progress we make in the Experimental Neuroscience Course at the SWC. Throughout the term, we aim to learn to build a wide range of tools required to run a neuroscience lab, while reporting on our progress in this weekly blog. This week, we took the first steps towards recording electrophysiological signals. The physiological signals we aim to measure, electromyography (EMG) or electroencephalography (EEG), are rather small in amplitude, and direct measurements are often very noisy. Therefore, we first focused on building filters that can suppress noise in specific frequency bands. Second, to account for the small size of the signals, we explored electronic amplification.

Building a frequency filter

The non-linear charging of a capacitor in series with a resistor (an RC circuit) can be used to filter out high or low frequency signals. The frequency threshold can be selected because it depends on the rate at which the capacitor charges and discharges. This in turn is related to the time constant $\tau$ (tau): the time taken for the stored charge to fall to $\frac{1}{e}$ of its initial value during discharge. As $\tau = RC$, we can choose discrete electrical components to set the cutoff of our filter. Relating $\tau$ to frequency gives the following equation:
$f_c = \frac{1}{2 \pi R C}$
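Plugging example component values into this equation shows how one might pick R and C for a physiological frequency band; the values below are illustrative, not the ones we used on the breadboard:

```python
import math

def cutoff_hz(resistance_ohm, capacitance_farad):
    """RC filter cutoff frequency: f_c = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * resistance_ohm * capacitance_farad)

# e.g. a 16 kOhm resistor with a 100 nF capacitor puts the cutoff near
# 100 Hz, around the upper edge of typical EEG bands
fc = cutoff_hz(16e3, 100e-9)
print(round(fc, 1))  # 99.5
```

Because f_c depends only on the product RC, many component pairs give the same cutoff; in practice the choice is constrained by the source and load impedances on either side of the filter.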
The order in which the resistor and capacitor are arranged, and the point at which the voltage is read out, determine whether our filter passes only the high or only the low frequency components of the input (i.e. whether it is a high-pass or low-pass filter). When the output is taken across the resistor, with the capacitor in series before it, low frequency input charges the capacitor fully, after which no more current flows to the output; only high frequency components, which keep charging and discharging the capacitor, pass through, so this configuration is a high-pass filter. Conversely, when the output is taken across the capacitor, placed after the resistor, high frequency input fails to charge the capacitor appreciably, so high frequency components are attenuated, while low frequency input charges the capacitor and the output follows the input; this configuration is a low-pass filter. Thus, by adjusting the configuration of the circuit, we can produce either a low-pass or a high-pass filter.

By combining our low- and high-pass filters in series, we also aimed to produce a band-pass filter, where only signals between two frequencies can pass. With the right combination of resistors and capacitors, this allows us to filter out all non-relevant signals above and below the frequency range of (neuro)physiological signals.

Operational amplifiers

The physiological signals we aim to measure have amplitudes on the order of $10-100 \mu V$ (EEG) or $20 mV$ (EMG). Furthermore, we observed that the filters we built attenuate the amplitude of the signals even further, which means that amplification will be necessary. Therefore, the second aim of this week was to build an operational amplifier (op-amp). The basic principle of electronic amplification is a circuit that makes clever use of the properties of resistors and transistors. In the circuit shown in the figure, a positive voltage is applied at $V_{CC}$, and a negative voltage is applied at $V_{EE}$.
Since the resistor at the tail of the circuit is much larger than the two parallel resistors at the top, the current flowing through the circuit is determined by the potential difference and this large resistor (following Ohm's law). If the input voltages ($V^+_{IN}$ and $V^-_{IN}$) are equal, the current flowing through the two arms will be equal, and the voltage measured at $V_{OUT}$ will be 0. When one of the input voltages surpasses the other, more current flows through that transistor, causing the measured voltage on the contralateral side to rise. The measured signal $V_{OUT}$ thus reflects the amplified difference between the input signals: $V_{OUT}=A(V^+_{IN}-V^-_{IN})$, where $A$ is the gain of the amplifier. It should be noted that the circuit shown in the left figure is the simplest, original example of a differential amplifier, and it depends on the two transistors having exactly the same properties; modern amplifiers therefore use more sophisticated circuits.

This open-loop gain, which is typically on the order of 100,000, can be regulated with a negative feedback loop, as shown in the right figure above. When (a proportion of) the output voltage is fed back into $V^-_{IN}$, the amplifier will drive the output voltage to whatever level is necessary to keep the differential voltage between the inputs at zero. Thus, when a voltage divider is used to apply a proportion of the output voltage to the $V^-_{IN}$ port of the amplifier, the relative sizes of the resistors set the closed-loop gain: $A = 1 + \frac{R_1}{R_2}$, which is approximately $\frac{R_1}{R_2}$ when $R_1 \gg R_2$.

Outstanding difficulties

At the end of the week, we tried to apply the above to record some muscle activity using an EMG electrode and an op-amp with negative feedback. We displayed the output voltage on an oscilloscope, but observed mainly noise. Next week, we will attempt to build a more sophisticated circuit, using pre-amplification and filtering.
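The feedback gain can be sanity-checked numerically. A small sketch assuming an ideal op-amp in the non-inverting configuration, with $R_1$ from output to inverting input and $R_2$ from inverting input to ground; the resistor values are hypothetical examples (note the exact ideal gain is $1 + R_1/R_2$, which approximates $R_1/R_2$ when $R_1 \gg R_2$):

```python
# Closed-loop gain of an ideal non-inverting op-amp with a voltage-divider
# feedback network. Resistor values below are hypothetical examples.

def closed_loop_gain(r1_ohm, r2_ohm):
    """Ideal non-inverting gain: A = 1 + R1 / R2."""
    return 1.0 + r1_ohm / r2_ohm

def amplified(v_in_volts, r1_ohm, r2_ohm):
    """Output voltage for a small input signal, ideal op-amp assumed."""
    return v_in_volts * closed_loop_gain(r1_ohm, r2_ohm)

# e.g. 99 kOhm / 1 kOhm gives a gain of 100: a 1 mV EMG-scale signal
# becomes roughly 0.1 V at the output, large enough for an oscilloscope
print(closed_loop_gain(99e3, 1e3))  # 100.0
print(amplified(1e-3, 99e3, 1e3))
```

The key point the feedback loop buys us: the closed-loop gain depends only on the resistor ratio, not on the enormous and poorly controlled open-loop gain of the bare amplifier.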