There's a widespread belief that the trouble with brain-machine interfacing is understanding the 'neuronal code': what the brain is saying and how it encodes information about its world. There are over one hundred billion neurons in a human brain talking amongst themselves in a hidden language our best neuroscientists are only beginning to understand. This is partly true. Yes, we're beginning to understand the neural code; and yes, some of our best neuroscientists are involved in that effort; but no, that's not the hard part. In all of our experience so far, decoding neural signals just isn't that difficult: almost as soon as the recording technology has become available, we've found enough information encoded in the reachable neurons for there to be a usable correlation (or at least a correlation that can be trained*), and classic machine learning methods can make sense of the noise with no more special knowledge of the brain than is needed for, say, guiding rockets in flight. Once we have the spike times captive, we've never had trouble interpreting them. Georgopoulos discovered the tuning curve in the 80s, literally without a computer.
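To make concrete how little brain-specific machinery the decoding step needs, here is a minimal sketch in the spirit of Georgopoulos-style population decoding: ordinary least-squares from binned spike counts to 2-D hand velocity. Everything here is a fabricated stand-in for a real recording session (the cosine tuning, the unit count, and the noise model are all assumptions); the point is that the decoder itself is just the regression you'd run on any noisy linear sensor.

```python
# Toy decoding sketch: linear regression from spike counts to hand velocity.
# All data below is synthetic; real sessions differ only in where the
# numbers come from, not in the math.
import numpy as np

rng = np.random.default_rng(0)

# Fake session: 5000 time bins, 40 sorted units, each cosine-tuned to
# movement direction (the tuning curve Georgopoulos described in the 80s).
n_bins, n_units = 5000, 40
velocity = rng.standard_normal((n_bins, 2))            # true (vx, vy) per bin
preferred = rng.uniform(0, 2 * np.pi, n_units)         # preferred directions
tuning = np.stack([np.cos(preferred), np.sin(preferred)], axis=1)
rates = 10 + 5 * velocity @ tuning.T                   # cosine-tuned rates
spikes = rng.poisson(np.clip(rates, 0, None))          # noisy spike counts

# The "decoder" is plain least squares with a bias column -- nothing
# brain-specific anywhere in this step.
X = np.column_stack([spikes, np.ones(n_bins)])
W, *_ = np.linalg.lstsq(X, velocity, rcond=None)
pred = X @ W

r = np.corrcoef(pred[:, 0], velocity[:, 0])[0, 1]
print(f"decoded-vs-true vx correlation: {r:.2f}")      # high on this toy data
```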
Decoding the signals isn't the hard part: getting the signals captive is. Current recording technology, put simply, sucks. Really, really badly. Current implants pick up only around 4 cells per electrode and last an average of 3 years before the brain encapsulates them in fibrous tissue (hypothesized to be an immune response) and renders them useless. The entire art of BMI is predicated on "sorting cells": discerning single neuronal units in the multiunit mess picked up by the electrode. Not only must this be done by hand, but even the best humans often can't do it much better than chance, because noise and neurons look and sound nearly identical in a lot of cases. Yes, at some level this doesn't matter, since noise will simply be uncorrelated; but at the very least it completely throws off arguments about significance (are those 47 untuned things neurons or noise?). Microstimulation, the method of 'writing' information back to the brain through tiny electrical pulses meant to trigger action potentials, frequently causes seizures and is tantamount to cooking the brain tissue immediately proximal to the electrodes.
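For a sense of what "sorting cells" actually involves, here is a toy sketch of the standard pipeline: detect threshold crossings on a raw trace, cut out waveform snippets, project them with PCA, and cluster. The trace and the two spike shapes are fabricated, so the clusters separate cleanly; on real electrode data they smear into each other, which is exactly where the by-hand judgment calls come in.

```python
# Toy spike-sorting pipeline on a synthetic trace. Everything here
# (sampling rate, spike shapes, noise floor) is an assumption chosen
# so the example runs cleanly.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical 30 kHz trace: background noise plus two units with
# different spike shapes dropped in at random times.
fs, dur = 30_000, 10
trace = rng.standard_normal(fs * dur) * 5.0            # noise floor
t = np.arange(48)
shapes = [-60 * np.exp(-((t - 12) / 4) ** 2),          # unit A: deep, narrow
          -35 * np.exp(-((t - 16) / 8) ** 2)]          # unit B: shallow, wide
for shape in shapes:
    for start in rng.integers(0, fs * dur - 48, 300):
        trace[start:start + 48] += shape

# 1. Detect: negative crossings of ~4.5x the robust noise-SD estimate.
thresh = -4.5 * np.median(np.abs(trace)) / 0.6745
idx = np.flatnonzero((trace[1:] < thresh) & (trace[:-1] >= thresh))
idx = idx[(idx > 16) & (idx < len(trace) - 32)]

# 2. Cut: align a 48-sample snippet around each crossing.
snippets = np.stack([trace[i - 16:i + 32] for i in idx])

# 3. Reduce + cluster: 2 principal components, then k-means.
feats = PCA(n_components=2).fit_transform(snippets)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
print({k: int((labels == k).sum()) for k in (0, 1)})   # events per putative unit
```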
Compared to the decoding work that's being done (the 'real' neuroscience), this recording problem is rarely mentioned. Electrode technology is generally seen as a back-room technician thing; at best, it's something for application-agnostic biomedical engineers of the more arcane subspecialties to publish in J. Neuroeng. or J. Neurosci. Meth., but that low-level tech stuff just isn't worth a Nature paper, clearly. This goes back to the 'best neuroscientist' perception: decoding brain activity sounds hard. Not to discount decoding; it is genuinely hard. But the recording problem is harder. Decoding neural activity in a meaningful way, if the recording problem were solved for you, would win you a free trip to Sweden. The Great Neuroscientist Who Understood the Brain will get the recognition; the tissue engineer who produced no Fundamental Insights will never take the spotlight, though he'd probably come away with a brilliant technology. A vendor makes the recording systems and headstages, which the scientist buys in order to understand the brain. This is a problem. We need to realize that the block is almost entirely the recording technology we have to play with. That's the bottleneck. Until it's resolved, we won't have full BMI.

When Kennedy developed his neurotrophic electrode, capable of reading 60 neurons per electrode with little degradation over time, the breakthrough he reported was being able to tell whether a locked-in patient was either (a) internally 'talking' or (b) not internally 'talking'. That straightforward baby step (training a Bayesian classifier on the activity of several cells in an area long known to be associated with speech) glossed over the fact that he'd talked 60 cells into growing up a glass tube. And even this is still a far cry from the information we really care about, which is largely the connectivity between neurons, not just their individual action potentials. Unfortunately, it's not at all obvious how to get any of this yet.
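The classifier half of Kennedy's result really is as simple as described. Here's a sketch of the kind of thing it takes, with fabricated firing rates standing in for the real recordings (the unit counts, baseline rates, and modulation factors are all assumptions), just to show how little machinery the 'talking vs. not talking' discrimination needs once the cells are captive:

```python
# Bayes classifier over per-trial firing rates from a handful of units,
# labeled 'talking' vs 'not talking'. The rates are made up; only the
# shape of the approach is the point.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

n_trials, n_units = 200, 5
labels = rng.integers(0, 2, n_trials)                  # 1 = internal speech
base = rng.uniform(5, 20, n_units)                     # baseline rate per unit
gain = rng.uniform(1.2, 2.0, n_units)                  # modulation when 'talking'
rates = rng.poisson(base * np.where(labels[:, None] == 1, gain, 1.0))

clf = GaussianNB()
acc = cross_val_score(clf, rates, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")          # well above chance
```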
* One of the great secrets of neuroscience is that the brain's innate (default?) mapping only matters so much. Cortex can be conditioned into behaving within pretty wide limits (though far from limitlessly). You can retrain M1 to encode just about any motor parameters you want, but you'll probably only get so far trying to stim sensory data into it.
I’m actually not sure how much I still maintain my 2009 belief that interface technology was being neglected by mainstream labs. The advent of optogenetics and the attention Boyden and Deisseroth have received over it pretty well refute the idea that Real Scientists don’t work on technology. That said, interface technology does remain the hard problem of BMI, and is still a giant cliff face of pain in our way that we’ll have to address eventually.