March 08, 2008
Can a thinking, remembering, decision-making, biologically accurate brain be built from a supercomputer?
- The behavior of the computer replicates, with shocking precision, the cellular events unfolding inside a mind. "This is the first model of the brain that has been built from the bottom-up," says Henry Markram, a neuroscientist at Ecole Polytechnique Fédérale de Lausanne (EPFL) and the director of the Blue Brain project. "There are lots of models out there, but this is the only one that is totally biologically accurate. We began with the most basic facts about the brain and just worked from there."
Also, Scientists Develop Artificial Neural Networks - which, typically for humans, will be used to make better war technologies. That's where the money is, innit? Assholes.
-
I dunno. As a wet lab molecular biologist, I have to admit that I’m extremely skeptical. I see a lot of this sort of work around my facility from the engineering and CS crowd, and a lot of it is pretty useless for now. In fairness, I readily admit that accurate models of biological processes are the holy grail of research. I just think that we’re a long, long way from anything of value in this area.

The problem with the CS folks is that they simply come from a different background and completely miss the complexity of the system they’re dealing with. The CS skillset has its own set of challenges, and folks tend to get wrapped up in solving those. The result is that they end up with models based on many false assumptions. In other words, the model is highly precise and completely inaccurate. The models have no tie to reality because they simply don’t have enough inputs. So. The output of these complex models usually has so many errors that it’s not useful for anything of consequence.

When I talk of ‘complex biological models’ I think of relatively simple tasks like, say, protein folding. People have been modeling this stuff for decades and the output still isn’t terribly useful. Neuroscience exists on a plane of infinitely greater complexity. Without really understanding the root cause and effect, these models are built on correlations. As any good Wall Street quant will tell you of late, correlations tend to break down at inopportune times when they’re driven by some unknown third factor. When you build a model on thousands of correlations, you end up with something that fails in completely unpredictable ways. (Toy sketch at the end of this comment.)

I suppose the retort to this argument is that this logic would invalidate *all* models, starting with the top quark – it’s completely reductionist. That’s a valid criticism. That being said, the proof is in the output. I see models every day, presented by some bright, eager assistant professor. They’re made, presented, published, and then forgotten. Why? Because they’re not reliable enough to base any sort of future work on. Anyway, still worth trying, I suppose.

On the other hand, as one of the struggling wet lab researchers getting dosed daily with harmful chemicals, I will admit to some frustration with the publicity that this research often elicits. Shiny young programmers and engineers drop in, throw up fancy 3D models, pull down gobs of funding and praise, and pump out buckets of papers that ultimately produce nothing. Meanwhile their wet lab counterparts slave away in the salt mines for a paper every few years in obscure journals. Not very sexy, but the basis for tangible progress. So, as you might imagine, there is a good deal of tension between these communities.

To sum up: this sort of research has validity. It is important. Hopefully in a few generations it will lead to amazing advances. In the short term, however, I’m not counting on anything of use. This is why, when ‘important’ new models are presented, the response amongst the wet lab crowd is usually a shrug and an eye roll. “Pretty diagrams. I bet you’ll get lots of press coverage and grant money for that. What can I do with it?”
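To make the correlation point above concrete, here's a toy sketch: just Python with made-up numbers, nothing to do with anyone's actual model. Two measurements track each other only because a hidden third factor drives both; change how the hidden factor behaves and the correlation your model was built on quietly disappears.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "measurements" that don't affect each other at all; both are
# driven by a hidden third factor plus their own noise.
def observe(hidden):
    a = 2.0 * hidden + rng.normal(0, 0.5, hidden.size)
    b = -1.5 * hidden + rng.normal(0, 0.5, hidden.size)
    return a, b

# Regime 1: the hidden factor varies a lot, so A and B look tightly coupled.
hidden_1 = rng.normal(0, 3.0, 10_000)
a1, b1 = observe(hidden_1)
print("correlation while the hidden factor dominates:", np.corrcoef(a1, b1)[0, 1])

# Regime 2: the hidden factor goes quiet and the noise takes over.
# Same mechanism, but the correlation the model was built on is gone.
hidden_2 = rng.normal(0, 0.1, 10_000)
a2, b2 = observe(hidden_2)
print("correlation after the regime shift:          ", np.corrcoef(a2, b2)[0, 1])
```

Neither measurement changed how it works; only the thing nobody was recording did. Now imagine thousands of those stacked on top of each other.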
-
This is some pretty ambitious and impressive work. Per the article, these people validate their computational model by operating their own 'wet lab' (albeit staffed by robots), so it would seem they have taken FAQ's criticisms to heart.

"Meanwhile their wet lab counterparts slave away in the salt mines for a paper every few years in obscure journals."

That's a surprising comment — obscure journals? Nature and Science devote many more pages to experimental life sciences than they do to computational anything.
-
I have to admit I'm pretty skeptical about this endeavor myself. It reminds me of when AI started with a top-down approach and then ran into the realization that we didn't even really understand how our own vision system works, let alone many, many other facets of brain/mind function. Now they're trying a bottom-up approach, but I suspect that without some radical insights they'll just end up with a perfect simulation that doesn't actually 'do' anything. It will just sit there and simulate cells. It won't simulate an actual mind, or even the components that could possibly make up a mind.

From my understanding, at this point in neuroscience most of the brain is a series of 'black boxes': we have an idea of what goes in, and we know what comes out, but we don't know what goes on inside each box. This project seems like an attempt to perfect a simulation of the cardboard the boxes are made from, which doesn't truly get you any closer to understanding what's happening inside the boxes, or how you might simulate that.

Plus, I'm all for Moore's Law, but I'm not at all sure some of the hardware hurdles will be overcome on the ten-year timeline he talks about. I hope it all gets somewhere, but this reminds me of the kind of grand project that starts sputtering after a few years, whereupon the scientists who slaved in the trenches on it bail out and go off to be much more successful on much less ambitious versions of essentially the same work, applying what they learned in the first attempt.
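Just to put rough numbers on the hardware worry (these are the commonly cited ballpark figures, not anything from the article, so take it as a sanity check rather than a forecast):

```python
# Back-of-envelope: how far does raw Moore's Law scaling get you in ten years,
# versus the jump from one simulated cortical column to a whole human brain?
# All figures below are rough, commonly cited ballpark numbers.

neurons_in_column = 10_000            # roughly one neocortical column
neurons_in_brain = 100_000_000_000    # ~100 billion, the usual textbook figure

scale_up_needed = neurons_in_brain / neurons_in_column

years = 10
doublings_optimistic = years / 1.5    # doubling every 18 months
doublings_conservative = years / 2.0  # doubling every 24 months

print(f"neuron-count gap to cover:  ~{scale_up_needed:,.0f}x")
print(f"Moore's Law over {years} years: ~{2 ** doublings_conservative:.0f}x "
      f"to ~{2 ** doublings_optimistic:.0f}x")
```

And that only counts neurons; it says nothing about how much detail per cell and per synapse the model has to carry, so whatever closes the gap would have to come from more machines and smarter architecture, not transistor scaling alone.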
-
As this documentary clearly shows, not only is this possible, but it's already being done! Stop all the downloadin'
-
Oh, yeah, of course there are plenty of papers from the wet lab world in Science and Nature. And granted, there are a lot of other fine journals that don't go by those names. In general I was simply trying to convey, to those outside the field, the source of the friction that exists. Sure, you can point to data and reasons the model isn’t relevant. But personally I think that in a lot of ways the criticism simply comes down to emotion.

To dig a little deeper on the subject, the situation is this: most topics are not destined to be flashy from the start. This is, of course, by necessity. Not everyone can get a slot in Science every week. So most folks will toil away on research that will end up somewhere else - no matter how successful. Perhaps JBC, maybe Nucleic Acids Research. (Both are fine journals, by the way.) And of course this is oftentimes no different for the modeling folks. In fact, due to the bias I was originally commenting on, many won’t even see an Impact Factor >5. (If you believe in the Impact Factor as a proxy for quality. But that’s a different post.) But, on the other hand, I do know of some groups (big names in their field, to be sure) that seem to churn out articles in “top” journals (IF>20?). The odds of being one of these groups, however, seem to be a little higher than average. But maybe that’s just an artifact of the newness of bioinformatics as a sexy new field. Eventually every hot new area gets inundated with talent and the easy pickings evaporate.

But anyway, even if the models *were* more useful, and even if it wasn’t the hot field of the past decade, there’d still be frustration from those in the wet lab. The fact that the model makers spend their days drinking coffee in pleasant, dimly lit rooms while the biochemists spend their days in damp cold rooms will always create tension. Sour grapes, I know. But that’s life.

I throw this out there simply as a way of shining some light on the emotion that is often behind the science. I think this sort of thing is usually lost on the general public. Being human, emotion plays a surprisingly large role in a lot of discoveries. An example: I saw a James Watson lecture some time ago. He basically stated that Rosalind Franklin was left off the paper announcing the structure of DNA because she wasn’t likable. Note that Watson and Crick are now enshrined by history as two of the most important scientists of all time. While the treatment of Franklin may be more of a statement on Watson as a person, it’s still another data point on emotion in science. Come to think of it, it’s a good analogy for the overall topic, too: the person generating the data got no recognition, and perhaps cancer from work-related radiation, while the model builders used her data to become rich and famous. (Joking there. Sort of.)
-
See also: Lobsters.
-
I'd like to see the zombie who could eat THAT brain!
-
"The result is that they end up with models based on many false assumptions."

Nailed.