One of the many interesting questions neural engineering raises is, “where is the border?” We assume there is a common-sense line between you and me, between being different people, but that line starts to get fuzzier when you bring high-bandwidth, low-latency brain-machine interfaces into the mix. If there’s a one-bit-per-second direct connection between a neuron in your brain and a neuron in my brain, not a whole lot changes. But we can expect things to get weird when you start talking about pathways between hundreds of millions of cells on both sides.
I was thinking about this question while watching the Lee Sedol v AlphaGo matches in Seoul two weeks ago. Lee Sedol has no way to add cognitive power, but AlphaGo clearly gets better with additional GPUs.
As long as we’re talking about “AI,” we still consider this expanded machine part of the same identity. We might draw identity lines differently if what was being distributed were some kind of service-oriented architecture of microservices with clean interfaces, but in this case it looks like AlphaGo farms out corners of the search space to workers for evaluation and merges the results, which feels enough like one brain to me.
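That farm-out-and-merge pattern can be sketched in a few lines. This is a toy illustration, not AlphaGo’s actual pipeline: `evaluate`, `worker`, and the hash-based scoring are all hypothetical stand-ins for an expensive position evaluator.

```python
def evaluate(move):
    # Deterministic stand-in for an expensive position evaluation (hypothetical)
    return (move * 2654435761) % 1000

def worker(moves):
    # Each worker scores its own corner of the search space
    return {m: evaluate(m) for m in moves}

def distributed_search(moves, n_workers):
    # Farm out disjoint slices of the candidate moves, then merge the
    # per-worker results into one view and pick the best move overall
    slices = [moves[i::n_workers] for i in range(n_workers)]
    merged = {}
    for s in slices:
        merged.update(worker(s))
    return max(merged, key=merged.get)

moves = list(range(361))  # one candidate per point on a 19x19 board
best = distributed_search(moves, n_workers=8)
```

The point of the sketch: because the workers evaluate disjoint slices and the results are merged before any decision is made, the answer is the same no matter how many workers you use, which is why the whole ensemble still reads as one “identity.”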
This raises a question about how a team of people, maybe Lee Sedol and a few friends, would perform against AlphaGo if they had some way of coordinating amongst themselves and coming to a consensus about what move to make.
We form collections of humans (teams, companies) all the time to solve problems beyond the reach of an individual, so being “distributed” is not a unique advantage of AI. There are many well-known scaling problems associated with collections of humans (improvement is far sublinear), but from the graphs above the same is true of AlphaGo. Functionally dividing and conquering is a different thing than throwing more compute at progressively harder computational problems, but the existence of wisdom-of-crowds effects suggests there’s probably something interesting there.
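The sublinear improvement has a simple statistical face in the wisdom-of-crowds case: averaging independent noisy judgments shrinks error roughly like 1/√n, so each added member helps less than the last. A minimal simulation, assuming a hypothetical Gaussian noise model for individual estimates:

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0  # the quantity the group is trying to estimate

def estimate():
    # One noisy individual judgment (hypothetical Gaussian noise model)
    return random.gauss(TRUE_VALUE, 20.0)

def mean_error(n, trials=2000):
    # Average absolute error of a consensus (mean) over n members
    errs = []
    for _ in range(trials):
        consensus = statistics.fmean(estimate() for _ in range(n))
        errs.append(abs(consensus - TRUE_VALUE))
    return statistics.fmean(errs)
```

Running `mean_error` for growing n shows the diminishing returns: going from 1 to 16 members cuts the error by about a factor of four, and the next factor of four costs another 48 members. This only holds when the members’ errors are independent, which is exactly what tight coordination (or a shared brain) tends to destroy.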
I wonder about the relationship between how you split up “identity” and functional capability. Would you get better performance on a task by constructing an AI with lower-bandwidth interfaces between islands of powerful processing capability? If you could directly link the brains of multiple humans with channels more efficient than vision and audition, the concept of identity would get fuzzy and things might be different, but would they be measurably better in some way? How do the bandwidth and latency between two points, and the interfaces they imply, affect function?
As a mostly unrelated aside, every now and then I wonder why Demis Hassabis keeps publishing basic neuroscience papers — and mostly only those! — and then I remember that many of my most interesting questions about intelligence and deep learning share some unifying element with basic neuroscience, which has helpfully provided some model systems for us to study. Interesting times we live in.