These are notes on a Coursera course you can find here. It's called 'Synapses, Neurons and Brains', but I generally call it 'Synapses, Neurons and Brains, Oh My' for the craic.
History:Camillo Golgi and Ramón y Cajal founded modern neuroanatomy around 1900. They stained brain tissue with a silver-based method that stained only about 1% of cells, so rather than a mess of dye, they saw delicate cobwebs of neurons: the first glimpse of neural networks. They couldn't see synapses. Golgi thought the brain was not made of discrete cells; Ramón y Cajal differed.
Connectomics:Take wafer-thin slices of brain tissue (on the nanometer scale). Scan each slice with an electron microscope. Stack these 2D slices back into a 3D shape. This creates a 3D model, complete at the level of networks/synapses/connexions, of that chunk of the brain. The goal is to understand the connexion between this 3D connectome model and behavior. How do these networks cause behavior, or cognition, or mood?
The 'brainbow':The brain is grey and uniform in color. The brainbow overcomes this by dyeing neurons fun colors. Developed recently by a group from Harvard, it works by genetically inserting pigments. It's not fine enough to see synaptic connexions, but you can see a strand of, say, blue tissue going towards green tissue. (So connectomics is at a smaller scale than the brainbow.) The brainbow can tell us how learning changes the structure of the brain. Because it is a genetic intervention, it can tell us what genes are expressed in what areas/systems. If we make a particular cell type purple, then the brainbow technique will let us see where that type is. It is useful for discovering the long strands that go from one side of the brain to the other.
Optogenetics:Use genetic manipulation to implant light-sensitive proteins into neurons. The retina naturally has light-sensitive receptors, so you're turning other brain cells into retina-type, light-responsive cells. They transform photons into electrical activity. Then you use fiberoptics to beam light at these mutant cells and they fire. A particular ion channel called channelrhodopsin opens when exposed to blue light, making the cell fire. (We can confirm this by recording the cell's activity with a microelectrode.) A pump from Natronomonas pharaonis (halorhodopsin) is sensitive to yellow light, but when exposed to yellow light the cell DOES NOT fire. Inhibitory effect. This is very specific control: a specific cell type, in a specific region, can be turned either on or off as we wish. Researchers (led by Karel Svoboda) have controlled a mouse's behavior in this way: made it drink or stop drinking. About 100 neurons were under control.
Brain-machine interface:To build a brain-machine interface, we need to understand the electrical language/electrical code of the brain in real time. Poke a tiny microelectrode into a particular neuron. Read the electrical activity, the highs and lows, going on in it. Do this with a bunch of neurons, and you'll get the music, made of multiple beats, that represents an image, or an intention, or whatever, depending on the region of the brain. Read this concert of electrodes, send it to a machine. The machine controls a robotic arm if you're paralyzed, or a toy helicopter if you're bored. A monkey has successfully used a robotic arm to feed itself using this method. (Research by Andrew Schwartz of the University of Pittsburgh.) The other direction is from the machine to the brain, e.g. cochlear implants. By recording the basal ganglia's activity with microelectrodes, we can see that in Parkinson's the pattern is very wrong. An implant can be put into the basal ganglia and stimulate it with a more normal, healthy, electrical pattern. (A battery is implanted in the chest.) This is an effective clinical intervention. In the future, we want to use nanotechnology to make a long-term implant that will record the electrical activity in the brain, and can wirelessly transmit recordings, or wirelessly stimulate. The software also needs to be good enough to interpret the signals in real time. Obama's brain mapping project aims to develop the algorithms to read millions of electrodes simultaneously. Sense input could be fed in by implanted electrodes (in the future). Prosthetics could have sensitive skin.
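The decoding step can be sketched with the population-vector idea used in Georgopoulos/Schwartz-style arm control: each motor-cortex neuron fires fastest for its 'preferred' movement direction, and the decoder sums preferred-direction vectors weighted by firing rate. A minimal sketch, assuming idealized noiseless cosine tuning and evenly spread preferred directions (all numbers made up):

```python
import numpy as np

# 100 model neurons with evenly spaced preferred directions (radians).
n_neurons = 100
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

def firing_rates(true_direction):
    """Idealized cosine tuning: baseline + modulation (spikes/s)."""
    return 10 + 8 * np.cos(true_direction - preferred)

def population_vector(rates):
    """Sum each neuron's preferred-direction unit vector, weighted by its rate."""
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x)

true_dir = np.deg2rad(60)
decoded = np.rad2deg(population_vector(firing_rates(true_dir)))
print(round(decoded, 1))  # recovers 60.0 degrees
```

A real interface would estimate the rates from spike counts in short time bins and update the decoded direction continuously.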
Blue Brain Project:in silico simulation/modelling of neuronal circuits using an IBM supercomputer. A full mathematical model of the brain. Record the spiking activity (electrical activity) of neurons, and write an equation that recreates it. This is a mathematical model of one neuron's firing activity. Put a whole heap of these simulated neurons together, link them up, and you have a virtual model of a system. This has been done for about 10,000 cells in about two cubic millimeters of cortex. (There are about ten million times as many neurons in the human brain.) We could make computer models sick with Parkinson's, Alzheimer's etc., then make them well again.
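As a toy illustration of 'write an equation that recreates spiking': the leaky integrate-and-fire model is the simplest such equation (the Blue Brain Project itself fits far richer Hodgkin-Huxley-type models; all constants below are illustrative, not fitted values):

```python
import numpy as np

def simulate_lif(i_input, dt=0.1, tau=10.0, v_rest=-70.0,
                 v_thresh=-55.0, v_reset=-75.0, r_m=10.0):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R_m * I.
    Emits a 'spike' and resets whenever V crosses threshold."""
    v = v_rest
    spike_times = []
    for step, i in enumerate(i_input):
        v += dt * (-(v - v_rest) + r_m * i) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)  # spike time in ms
            v = v_reset
    return spike_times

# A constant 2 nA current for 200 ms produces regular, repetitive firing.
spikes = simulate_lif(np.full(2000, 2.0))
print(len(spikes))  # roughly a dozen spikes
```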
Theory of dynamic polarization:The cell receiving the information becomes electrically polarized, and passes on that polarization. Dendrites are input devices. Axons are output devices. Ramón y Cajal got all this right. (Though he did not know that there are inhibitory and excitatory inputs.) Golgi wrongly thought that neurons were not discrete; he thought there was a continuous strand along which information flowed. In 1906, receiving the Nobel prize, they presented their differing views. The axon has varicosities (aka boutons) on it. These contain neurotransmitters. One axon might have 5000. The dendritic tree of a given neuron has many many axons touching it. When a neurotransmitter pings across these synapses, it creates an itty-bitty change in voltage. This itty-bitty change in voltage is called the synaptic potential. There are also axons touching the dendritic tree that ping with neurotransmitters and REDUCE the voltage. These are inhibitory presynaptic neurons. All these plusses and minuses come into the dendritic tree (i.e. the cell's input jack). They all sum up and their resultant voltage reaches the cell body of the postsynaptic neuron.
Axon as output device:The axon is the output device. The exit from the soma is called the axon initial segment. This segment contains special ion channels that enable the firing of an action potential. Off the action potential goes down the axonal tree. Along the axon are nodes of Ranvier. Between these nodes are internodes. The internodes are wrapped in myelin sheaths. Myelin is a lipid. The nodes of Ranvier are uninsulated by myelin - analogous to a bare piece of copper on a wire with the insulation stripped off. The nodes of Ranvier are similar to the axon initial segment in that they are electrically hot with ion channels that can boost the signal. At the end of the branches of the axonal tree are varicosities. These are the launch pads that send neurotransmitters across synapses. There is no myelin here because you need an uninsulated point to transmit the signal. Where there is myelin (i.e. at the internodes), there are no synapses; and where there are synapses there is no myelin. At the internode, there are special cells that wrap myelin around the axon. These cells, called oligodendrocytes, are a type of glial cell. They are the insulators of the brain. They might wrap hundreds of layers of myelin around the axon. Most - but not all - nerve cells are myelinated. (Multiple sclerosis is where myelination is sub-par and signals don't propagate well.) Note that only axons have myelin, not dendrites. Nodes of Ranvier are a few microns wide. They are gaps in myelin. They are excitable. What makes them excitable? Ion channels. Their signal-boosting power allows signals to propagate. Axonal trees can branch locally, or they can branch distally (reaching across the brain). The diameter of an axon is about a micrometer, a thousandth of a millimeter. So the axon is an output device that generates a signal (initial segment), conducts it (axonal tree), insulates it (myelin/glial cells), boosts it (nodes of Ranvier), and passes it along (boutons).
Dendrite as input device:Different cell types have differently-shaped dendritic trees. Each dendritic tree will receive inputs from many axons coming from many cells. Axons are interwoven with dendrites. Some cell types have spines on the dendrites. These dendritic spines are where axons dock. You can categorize cell types as spiny and non-spiny. Typical numbers for a cell: dendritic area = 20,000 µm²; dendritic spines = 5,000-8,000 (Purkinje cells have 200,000); area of one spine = 1 µm²; number of axonal inputs = 10,000. 50-60% of the area of the cortex is dendrite. Axons are longer, but dendrites are thicker and have more surface area.
Types of neurons:We've already seen spiny and non-spiny, excitatory and inhibitory. But the categorization is more complex than this. There are about 100 billion neurons. We can classify them in several ways: by the shape of the dendritic tree, by spininess, by excitatory vs inhibitory effect, and by firing pattern.
The synapse:A synapse is the gap where a presynaptic axon almost touches a postsynaptic dendrite. The varicosity/bouton of the axon contacts the dendritic spine. In the axon's boutons at the synapse, there are vesicles containing neurotransmitters. One vesicle might hold 5000 molecules of a neurotransmitter. On the postsynaptic part (the dendrite), there are receptors for these. A spike (which is digital) causes the vesicles at the varicosity to release their neurotransmitter across the gap. When the receptors receive the neurotransmitter, they pass on a voltage to their cell body. This is analog. The synapse can be described as a digital-to-analog converter. There are strong synapses that generate large-amplitude voltages in the dendrite, and weak ones that generate smaller voltages. Axon: digital voltage. Gap: chemical signal. Dendrite: analog voltage. An example spiny stellate cell in layer 4 of the cortex receives: 1430 inputs from others the same as it, 3105 inputs from layer 6 pyramidal cells, 355 inputs from local smooth cells, 360 synapses from faraway in the thalamus. Some of these are excitatory and some are inhibitory. At the cell body, all the inputs are summed. The axon initial segment then decides if the sum reached the threshold. If it did, the neuron fires. If it did not, it does not.
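The analog summation and the digital decision can be caricatured in a few lines (the millivolt amplitudes here are invented for illustration, not measured values):

```python
def soma_decision(epsp_mv, ipsp_mv, threshold_mv=10.0):
    """Analog stage: excitatory and inhibitory synaptic potentials sum
    at the cell body. Digital stage: the axon initial segment fires
    if and only if the summed depolarization reaches threshold."""
    total = sum(epsp_mv) - sum(ipsp_mv)
    return total >= threshold_mv

print(soma_decision(epsp_mv=[0.5] * 30, ipsp_mv=[1.0] * 4))  # 15 - 4 = 11 mV: fires
print(soma_decision(epsp_mv=[0.5] * 10, ipsp_mv=[1.0] * 4))  # 5 - 4 = 1 mV: silent
```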
Spike:A spike is an all-or-nothing event happening in the presynaptic axon. Let's look at why it's all-or-nothing. Hodgkin & Huxley were the guys who figured out a model for the spike, using the space clamp and the voltage clamp methods. They won the Nobel Prize in 1963. They studied the squid because it has big axons, about half a millimeter thick, rather than the few microns of ours. They began this work pre-WW2. A neuron has synaptic potentials in its dendrites, spikes in its axons. Synaptic potential generates spike. H&H poked an electrode into those big squid axons. What they saw: at rest, the inside of the axon is more negative than the environment. At the spike, it becomes suddenly positive. (After, there is a period where it is more negative than at rest.) This lasts about a millisecond, varying a bit depending on temperature. H&H, in 1952, came up with four equations that describe the spike. If a certain current comes from the dendrite, it dissipates. But once the quantity of current hits a threshold, instead of dissipating, the voltage goes up even more. This threshold is about 10mV above rest. (So if you take it from -70mV to -60mV, it'll suddenly shoot up to +40mV or so.) What machinery in the axonal membrane causes this spike to occur? H&H developed two techniques to study the axon: the space clamp and the voltage clamp. The space clamp makes the axon isopotential (i.e. makes the voltage along the whole axon's length the same by putting a good conductor in there). The voltage clamp is a more sophisticated method that fixes the voltage inside and outside the membrane of the axon at a chosen value. The voltage clamp is a feedback system; it injects current to exactly counterbalance the membrane's own current. It measures how much current it is injecting, and from that you know the current flowing across the membrane at the voltage you chose. If you set a voltage clamp to hold the voltage above the neuron's threshold, current flows first *into* the neuron, then *out*. It is a biphasic current.
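The feedback idea can be sketched numerically. Below, a passive RC patch of membrane (no H&H channels; all constants are made-up illustrative values) is clamped by a proportional-feedback amplifier; at steady state the injected current exactly mirrors the membrane's own leak current at the commanded voltage:

```python
def voltage_clamp(v_command, steps=2000, dt=0.01, c_m=1.0,
                  g_leak=0.1, e_leak=-70.0, gain=50.0):
    """Hold a passive membrane at v_command via feedback current injection."""
    v = e_leak
    i_inject = 0.0
    for _ in range(steps):
        i_inject = gain * (v_command - v)   # amplifier pushes V toward command
        i_leak = g_leak * (v - e_leak)      # membrane's own (leak) current
        v += dt * (i_inject - i_leak) / c_m
    return v, i_inject

v_final, i_final = voltage_clamp(-20.0)
# Injected current settles at the leak current drawn at -20 mV:
# g_leak * (-20 - (-70)) = 0.1 * 50 = 5 (arbitrary units)
print(round(v_final, 1), round(i_final, 2))
```

The point: you never measure the channel currents directly; you read them off the mirror-image current that the feedback loop has to inject.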
There is a fast inward activation, and after that comes a later, slow outward current. Either phase can be blocked by a drug (inward by TTX, outward by TEA), implying that they are two different currents. By playing with the chemistry, H&H found that the inward current is a sodium current flowing from outside to inside; the outward current is potassium ions flowing from inside to outside. The inward flow of sodium ions lasts such a short time because it inactivates itself. The cell body has synaptic and passive channels, and capacitance. The axon has an outward active potassium channel, an inward active sodium channel, a passive channel, and capacitance. Because current is conductance times driving force (i.e. the difference between the membrane voltage and the ion's equilibrium potential), H&H could figure out the conductance of the sodium and potassium ion channels. Stepping the voltage with a voltage clamp changes the conductance by opening ion channels in the axon. Sodium channels change conductance quicker than potassium ones. When you keep the voltage step on, the sodium conductance fades away (inactivates) after time. So the sodium channel is inactivating, but the potassium channel is not. H&H then faced the challenge of writing equations that modelled how these conductances grew and faded. The rising phase of the K-conductance goes as (1 - e^(-t/tau))^4, and its decay goes as e^(-4t/tau). The potassium conductance in the membrane is g_K = g-bar_K * n^4. n is a number between 0 and 1 that gets higher for higher voltages in the voltage clamp. n depends on time as well. It represents the proportion of K-channel gates in the membrane that are open at that moment. The power of four comes into it because for a potassium ion to cross the membrane, four similar gating particles must be in place. You can think of the channel as having four gates in series. They must all be open for a potassium ion to pass thru to the outside. When you depolarize the membrane (i.e. put a positive charge in it), the gates start to open.
Enough depolarization and all four are open. n can be thought of as the probability that a given gate is open. dn/dt = alpha_n * (1 - n) - beta_n * n. alpha and beta are rate parameters that change value depending on voltage. If alpha is big, gates move towards open; if beta is big, gates move towards closed. Sodium requires another variable to account for inactivation: g_Na = g-bar_Na * m^3 * h, where h is the inactivation variable. dm/dt = alpha_m * (1 - m) - beta_m * m, and dh/dt = alpha_h * (1 - h) - beta_h * h. A flow of sodium ions requires 3 gates rather than 4 to open. These are called m-gates. There is also an h-gate which closes slowly and inactivates the channel. The reason a threshold stimulus from the presynaptic neurons causes a spike is that it opens sodium channels. The positive current from the inputs opens gates to positive current from the sodium ions - this regenerative loop is what a spike is. The spike is self-limiting because the voltage it creates opens potassium channels, which carry positive charge out of the cell. This is why after a spike, the voltage goes below the resting -70mV; the potassium channels are still open, making things more negative. After a spike, the h-gate has closed, and so another spike can't happen. This is the absolute refractory period. The potassium channels that remain open also make the cell harder to excite for a while, contributing to the relative refractory period. Some cells might have a refractory period of 5 milliseconds, meaning they can fire 200 times a second, but 10ms is more normal.
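Putting the four H&H equations together gives a runnable spike simulator. This is a minimal sketch using the standard textbook squid-axon parameters (modern sign convention, rest near -65mV) and simple forward-Euler integration:

```python
import numpy as np

# Squid-axon constants (standard textbook values)
C_M = 1.0                             # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3     # peak conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials, mV

# Voltage-dependent opening (alpha) and closing (beta) rates for each gate
def alpha_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def beta_n(v):  return 0.125 * np.exp(-(v + 65) / 80)
def alpha_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def beta_m(v):  return 4.0 * np.exp(-(v + 65) / 18)
def alpha_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def beta_h(v):  return 1.0 / (1 + np.exp(-(v + 35) / 10))

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    v, n, m, h = -65.0, 0.317, 0.053, 0.596    # resting steady-state values
    n_spikes, above, v_trace = 0, False, []
    for _ in range(int(t_max / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)    # inward sodium current
        i_k = G_K * n**4 * (v - E_K)           # outward potassium current
        i_leak = G_L * (v - E_L)
        v += dt * (i_ext - i_na - i_k - i_leak) / C_M
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        if v > 0 and not above:                # count upward 0 mV crossings
            n_spikes += 1
        above = v > 0
        v_trace.append(v)
    return n_spikes, v_trace

n_spikes, v_trace = simulate()
print(n_spikes)   # a steady 10 uA/cm^2 input drives repetitive firing
```

All the all-or-nothing behavior falls out of the equations: drop i_ext below threshold and the count goes to zero; raise it and the spikes keep their shape, only the rate changes.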
Neurogenesis:Learning about neurogenesis might help us develop therapies for neurodegenerative diseases. In 1985, Pasko Rakic from Yale said that adult brains grow no new neurons. In 1997, Elizabeth Gould from Princeton said that she saw neurogenesis in tree shrews, then in primates in 1998. A paper from 1999 showed adult neurogenesis with staining techniques. Now with the two-photon microscope, you can see the development of new neurons over days. The more challenging a task given to a mouse, the more new cells are born. The new cells are born in a particular niche in the hippocampus, then shunted to a different part of the hippocampus where they sprout connexions. (This is mice we're talking about.) Neurogenesis happens in the olfactory bulb and the hippocampus in the mouse, not in all brain-parts.
Computational neuroscience:Even a single neuron can be considered to compute. Not just a neuron, but even a dendrite can execute some computations, as shown by recent work looking at neurons in the retina with new techniques. What problem needs to be solved by the organism? What mathematical techniques are needed to solve it? What hardware implements these algorithms? Different areas of the brain have different hardware and implement different algorithms to solve different problems. An example would be computing the distance to a cup I want to pick up, based on visual inputs. The brain computes from visual inputs what bits clump together as an object, i.e. figure-ground separation. The brain has an algorithm telling us which parts of a face to point the peepers at. We can see this in eye-tracking experiments - people look at the eyes and the mouth more than would happen by random chance. The visual system also has an algorithm that spits out recognitions: this is a face, this is a house (https://www.youtube.com/watch?v=UmnXt8zQ_Lw). The visual system also has an algorithm that identifies motion: this is moving left, this is moving towards me, this is not moving. Using the outputs of these computations, we can plan our behavior. I know it's a car, and I know it's moving towards me, so I know to stop at the kerb. Hubel & Wiesel won the Nobel Prize in 1981 for experiments where they implanted microelectrodes in the neurons of a living cat. They found a cell that fired when and only when the cat saw a line, at a particular angle, moving in a particular direction. When a vertical line moved, it fired a lot. When a horizontal line moved, it didn't fire at all. For lines that are pretty vertical, it fired quite a lot. One early theory about the neuron as a computational device was by McCulloch & Pitts in 1943, in a paper called 'A logical calculus of the ideas immanent in nervous activity'. This paper influenced computer science even more than it influenced neuroscience.
It was inspired by the binary/digital nature of the neuron, and by the idea that synapses are either excitatory or inhibitory. Their theory was this: suppose there is a hypothetical neuron with 3 excitatory inputs and 1 inhibitory one. The E inputs are each +1, the I input is -4. The threshold is 1. This neuron fires if E1, E2, or E3 is active, and I is not active. Now you have a logical formalism describing the rule controlling the firing of the neuron. Neuron qua logical device. Using logical devices, you can build a complex computer that can compute anything. It's interesting that this idea from neuroscience influenced computing. And ideas from computing often influence neuroscience, and back and forth. Real neurons are different from the hypothetical neuron in McCulloch & Pitts's model. Why? Because they are spread across quite a bit of space, whereas M&P modelled the thing as a point. What are the implications of this? Computational neuroscience is about creating mathematical models of brain activity. If we have a mathematical model of a thing, we understand that thing. This allows us to interpret results, and to predict results of future experiments. When we have a parsimonious mathematical model, it tells us which variables are worth paying attention to (e.g. conductance in the H&H model), and which can be ignored. A mathematical model also allows us to look at the thing as a functional element (e.g. as a computational component in McCulloch & Pitts's model).
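The worked example above translates directly into code, and the same threshold trick gives the basic logic gates that make M&P neurons a universal substrate:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) iff the weighted input sum reaches threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return int(total >= threshold)

# The example from the text: three +1 excitatory inputs, one -4 inhibitory
# input, threshold 1. Fires iff some E is active and I is not.
fire = lambda e1, e2, e3, i: mp_neuron([e1, e2, e3, i], [1, 1, 1, -4], 1)
print(fire(1, 0, 0, 0))  # 1: one excitatory input is enough
print(fire(1, 1, 1, 1))  # 0: inhibition vetoes (3 - 4 < 1)

# Logic gates as M&P neurons - the bridge from neuron to computer:
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a:    mp_neuron([a], [-1], 0)
```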
Cable theory of dendrites by Wilfrid Rall:Aims to create a mathematical model of how distant dendrites affect the output of the soma and axon. Rall in 1959-64 put forward a theory expanding on McCulloch & Pitts. He wanted to use a more realistic model of a neuron, taking into account the large extent of the dendritic tree. One thing this implies is that if you inject a current into the soma, most of it actually flows out into the dendrites. Potential changes along the length of the dendrite; it attenuates. There is a time-lag between the synapse receiving the input and the input reaching the soma. Synapses closer to the soma will have a smaller lag. Model a dendrite as a series of cylinders. These have varying diameters and lengths and conductances. At some point on a cylinder, there is a dendritic spine with a synapse at it. If current is injected here, it won't just flow towards the soma; it will flow in both directions. As it's flowing, some will leak out; it's attenuating coz of resistance. As well as the attenuation/leak, it will lose a whole lot of current when it reaches a fork in the road. Now, how do we describe this diminishing current mathematically? The axial current is proportional to the derivative of voltage with respect to distance (dV/dx). Axial current that is lost becomes membrane current. (It leaks out thru the membrane.) The change in axial current = membrane current; in other words, the change in axial current + membrane current = 0. At branching points, there is leakiness. If a synaptic potential of 30mV is generated at the distal end of a dendritic tree, 1mV might reach the soma. Because of how current spreads in all directions (not just somaward), synapses affect other synapses near them on the dendritic tree. There are neighbourhoods of synapses on the dendritic tree, and these neighbourhoods compute. Rall's theory says that with more time, or more distance, the current diminishes.
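For the simplest case, an infinite uniform cable at steady state, the attenuation has a closed form: voltage falls off exponentially with distance. The space constant below (lambda = 300 µm) is an assumed illustrative value, chosen so the numbers echo the 30mV-to-1mV example above:

```python
import math

def cable_attenuation(v0_mv, distance_um, space_constant_um=300.0):
    """Steady-state voltage along an infinite passive cable:
    V(x) = V0 * exp(-x / lambda), where lambda = sqrt(r_m / r_a)
    depends on membrane resistance vs axial resistance."""
    return v0_mv * math.exp(-distance_um / space_constant_um)

# A 30 mV synaptic potential generated 1 mm (1000 um) from the soma:
print(round(cable_attenuation(30.0, 1000.0), 2))  # about 1.07 mV survives
```

Branch points and transient (time-varying) inputs attenuate even more than this steady-state formula suggests.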
Interestingly, when time is low, distant points on the cable are much less affected by a current, but for larger values of t, this is not so significant. Rall's theory allows us to look at a voltage transient reaching the soma and guess how distant its origin is. Close synapses will produce narrow (i.e. short-lived) transients, and distal ones broader transients. The idea of neighbourhoods within the dendritic tree allows us to think of two kinds of computation: neurons compute within their dendritic tree, and they compute at the soma. Dendrites classify inputs. Dendrites can compute the direction of motion (e.g. visual motion). They can localize sound. Rall's neurons allow more complicated computation than the neuron of McCulloch & Pitts. The M&P neuron has no subtlety with regard to location. With a bunch of excitors and inhibitors alternating along a cable, there are a lot of IFs and THENs that can veto each other. Summation of inputs happens locally on the tree, and then the results of these also summate and enter the soma. Synapses are clustered on the tree, and that cluster has an output that is different than it would be without this cable interference. An example of the local sensitivity of the synapses: if activation sweeps thru excitatory synapses in a distal-to-proximal order, the signals arrive at the soma almost together and create a larger summated voltage there. (Consider the implications if this additional voltage makes the cell hit its threshold.) If they fire in the reverse (proximal-to-distal) order, the voltage at the soma is smaller and broader. Because of this directional sensitivity (i.e. the fact that proximal-to-distal is different from distal-to-proximal), consider a dendritic tree with sequential inputs from visual receptor neurons. If they sweep left-to-right it may fire, but not if they sweep right-to-left. This is one way a brain could tell what direction things are moving. A neuron was recorded in a mouse's brain in vivo that responds only to a line of particular orientation.
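Rall's sweep argument can be checked numerically: give each synapse an EPSP shape and a soma-ward conduction delay proportional to its distance, then compare the two sweep orders. Everything here (alpha-function EPSPs, delays, timings) is invented for illustration:

```python
import numpy as np

def soma_peak(activation_ms, delay_ms, tau=2.0, t_max=40.0, dt=0.01):
    """Peak of the summed soma voltage; each EPSP is an alpha function
    that lands at (activation time + conduction delay from its synapse)."""
    t = np.arange(0.0, t_max, dt)
    v = np.zeros_like(t)
    for t_act, delay in zip(activation_ms, delay_ms):
        s = t - (t_act + delay)
        v += np.where(s > 0, (s / tau) * np.exp(1 - s / tau), 0.0)
    return v.max()

delays = [8.0, 6.0, 4.0, 2.0]   # synapse order: most distal (longest delay) first
sweep = [0.0, 2.0, 4.0, 6.0]    # activations 2 ms apart

peak_in = soma_peak(sweep, delays)        # distal-to-proximal: arrivals coincide
peak_out = soma_peak(sweep, delays[::-1]) # proximal-to-distal: arrivals spread out
print(peak_in > peak_out)  # True: the "preferred" sweep makes a bigger peak
```

With the preferred sweep, every EPSP reaches the soma at the same moment (activation + delay = 8 ms for each), so the peaks stack; the reverse sweep smears the arrivals over 12 ms.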
The inputs to this orientation-selective neuron were found to respond to lines of various orientations. We don't yet know if the resultant firing at the neuron that was studied is simply summation of the inputs, or a more interesting dendritic computation of the inputs creating a response to its own specialty orientation. The retina is composed of several layers, starting with receptor cells, then bipolar cells. The ganglion cells output to the optic nerve. These ganglion cells have the directional selectivity thing going on too. The Reichardt detector was an early theory to explain directional sensitivity. It states that there is more inhibition in one direction, more excitation in the other. An asymmetry. Researchers are reconstructing the connectome of the retina from slices to figure this out. First find the direction a cell is sensitive to, then reconstruct the synapses around it. Inhibitory amacrine cells inhibit these retinal ganglion cells. The connectomics supports the Reichardt theory, because it shows that there are more inhibitory synapses on one side, so light sweeping from that side turns the cell off - directional selectivity. The complexity and computation of the brain is not just an emergent property of simple elements; the elements (i.e. neurons) are themselves complex and capable of computing.
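A Reichardt-style correlator is easy to sketch: each of two neighbouring inputs multiplies its own signal by a delayed copy of its neighbour's, and the detector takes the difference of the two subunits. Motion in the preferred direction lines the delayed and direct signals up, giving a positive output; the null direction gives a negative one. The sinusoidal stimulus and the 5-step delay below are invented for illustration:

```python
import numpy as np

def reichardt(left, right, delay_steps):
    """Correlation-based motion detector over two periodic input signals."""
    corr_pref = np.roll(left, delay_steps) * right   # delayed left meets right
    corr_null = np.roll(right, delay_steps) * left   # mirror-image subunit
    return np.mean(corr_pref - corr_null)

t = np.arange(200)
delay = 5
left = np.sin(2 * np.pi * t / 40)             # a grating passes the left input...
right = np.sin(2 * np.pi * (t - delay) / 40)  # ...then the right, 5 steps later

print(reichardt(left, right, delay) > 0)   # True: preferred (left-to-right) motion
print(reichardt(right, left, delay) < 0)   # True: null (right-to-left) motion
```

The connectomics result above is a biological variant of the same asymmetry: instead of a delayed excitatory multiplication, one side carries extra (amacrine-cell) inhibition.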
Mega projects to map the brain:The Allen Institute, Janelia Farm, the EU Human Brain Project, and Obama's 'brain activity map' are 4 big, heavily-funded projects to map brains. The Allen Institute is focusing on the visual system of the mouse. They already made an atlas of gene expression in the mouse's brain. Janelia Farm is in Virginia, near Washington D.C. The EU Human Brain Project grew out of the Blue Brain Project, based in Lausanne. Obama's brain activity map (BAM) aims to measure the spiking activity of millions of neurons simultaneously. There are 560 known neurological diseases. There is a project called the 'diseasome' that aims to map them genetically and otherwise. Anatomical or activational screwiness causes disease. Part of the motivation of the Blue Brain Project was to make raw data (not just papers) accessible to scientists. The more controversial aspect of the project is simulation. The Human Brain Project emerged from this. Simulation-based research can teach us some things about the workings of the brain. Connectomics is controversial because it is very labour-intensive and some argue that less detailed modelling could serve the same purpose. In the neocortex of different mammals, just under the skull, we see similar things. There are about 30,000 cells per mm³. There are pyramidal cells in there. There are axonal inputs from distal regions. But the inputs aren't random; they can be organized into six layers. The layers have different cell types (remember there are multiple ways of categorizing cells). A column in the somatosensory cortex of the mouse looks like a barrel, and the region is called the barrel cortex. Each column computes data coming from a particular whisker. You can see separate barrels in the brain, and these map topographically to the layout of the whiskers. Primarily, each barrel computes for its whisker, but then after that, the information spreads. In the cat's V1, there are whole columns that are sensitive to visual lines of a particular orientation.
The Blue Brain Project is focused on cortical columns for now. By simulation-based research, it aims to take all we have learned about cell types, synapses, and spiking activity, and conclude from that how outputs are formed. Remember that different cells have different firing patterns: stuttering, regular etc. The Blue Brain Project needs to figure out which cells connect to which, and also the firing properties of each cell type. We need a Hodgkin-Huxley-style model for each cell type. The Blue Brain Project also uses the passive cable equations we covered (and the active cable equations we did not) to understand things like dendritic computations. Idan Segev and others extended the Hodgkin-Huxley equations to be an even better fit to measurement. We also need mathematical rules for plasticity. We have spike-timing-dependent plasticity equations, but we need more plasticity equations. The IBM computer can compute 100,000 cells. Moore's Law suggests we'll be able to Blue-Brain the whole human brain by 2023. In the Netherlands, there is a group analyzing human brain tissue slice-by-slice when chunks o' brain are removed for surgical reasons. The Human Brain Project comprises hundreds of institutions combining their efforts. It is medical, computational, physiological etc.