Michael Levin EISM Interview

Michael Levin is a Distinguished Professor in the Department of Biology at Tufts University, where he holds the Vannevar Bush Endowed Chair and serves as director of the Allen Discovery Center and the Tufts Center for Regenerative and Developmental Biology.

Levin holds dual B.S. degrees, in computer science and in biology, and a PhD from Harvard University. He did postdoctoral training at Harvard Medical School (1996-2000), where he uncovered a new bioelectric language by which cells coordinate their activity during embryogenesis. Levin and his team have successfully induced limb regeneration in tadpoles and hope to apply the same advances to limb regeneration in humans in the not-too-distant future.

In this conversation, Levin talks to us about the usefulness of his team’s conception of intelligence and the irrational “teleophobia” that dominates the discipline of biology today. Levin’s work on limb regeneration is a clarion call for all scientists to rethink the nature of time, cause, and agency.

* The following text has been selected from the full interview, which may be viewed below.


You say that when a person makes a judgment about intelligence, it’s a measure of their own IQ. Can you elaborate on this and connect it to your work in biology?

So I want to be clear that intelligence, as you well know, is a very complex issue. I'm not going to claim that I have the only appropriate definition or the best answer. I'm just going to give you my thoughts on it and the framework that I've found useful in my work. I can fully accept that there are other people with other useful definitions and so on.

So for me, some work that has to be done by any theory of intelligence is that it has to help us directly compare and interact with what we now call diverse intelligences. I'm not interested in theories that apply just to mammals, or to mammals and birds, or that break down when we have to think about octopuses or insects or other things. I want a framework for intelligence that is going to handle all possible agents. This means not only the things we see in the phylogenetic tree here on Earth, but novel synthetic biology constructs, artificial intelligences that we may build in either hardware or software, potential exobiological agents, weird beings at other scales, including individual cells and subcellular molecular networks, enormous things like social structures, and perhaps the evolutionary process itself.

So I see all of these things as empirical claims, and, like all empirical claims, they're provisional. They're subject to updating by other people who are sharper than we may be in noticing what it is that a certain system is doing. So that's what I mean: when you make an IQ test, you're making a claim that this is the way I've discovered to interact with and interpret the system. Show me if there's a better way, but that's my current estimate. That's what I mean by intelligence.

So what is your definition of intelligence?

My definition of intelligence is basically what William James said, maybe a little bit generalized. I view intelligence as competency in navigating various spaces. These spaces can be all kinds of unconventional things: not just the three-dimensional space that we're used to, but also things like transcriptional spaces, meaning the space of all possible gene expression levels; anatomical morphospaces, basically the space of biological shapes and shape deformations; and physiological spaces. It can be many spaces, and there may be other spaces that I don't even know how to deal with yet, linguistic spaces and other things. It's competency in navigating those spaces.


Now what does that mean? Let's unpack it. I think the one thing that is fundamental to all agents, in order to call something an agent, to place it somewhere on this continuum of agency, is that we're really talking about different degrees of competency to pursue goals. There is some region of that space that you prefer to other regions. There are certain states that you like better than other states, and you have some amount of capacity to get there. Now that capacity may be extremely primitive, meaning that you might be a bacterium, and all you do is run and tumble, and that's all you know about your space. Or you might be very sophisticated, like a mammal or a bird that has a memory of the space and can navigate it with forward planning and things like that.

But in between lies a huge diversity of different policies for navigating that space. For example, how good are you at avoiding local minima? Do you have the patience to back away from a direct line to your goal, knowing that if you go around temporarily you will travel farther but do a better job in the end? Are you able to remember where you've been and to represent where you're going? There's a huge diversity. So William James, I think, had it right on the money, as he did with most things, when he said that intelligence is the ability to get to the same goal by different means. What kind of capacity do you have to achieve your goal when you've been perturbed, when you've been pushed to a different region of the space, when you or your environment have been altered in some way? Can you still get to your goal? Or are you basically a hardwired automaton that can only follow the exact same steps every single time?

So I like that definition. It's a functional definition, it's empirically tractable, it helps us do new work, and it prevents endless armchair and philosophical arguments. We make it an empirical question.
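Levin's distinction here maps onto a standard one in search and optimization. As a purely illustrative sketch (the landscape and walker names below are invented for this example, not taken from Levin's work), compare a hardwired greedy walker, which marches to the nearest preferred state and stays there, with a "patient" walker that sometimes accepts temporarily worse states, in the style of simulated annealing, and so can reach the same goal by different means:

```python
import math
import random

# Toy 1-D "space" of states: lower values are preferred states.
# There is a local minimum at index 2 and the goal (global minimum) at index 8.
LANDSCAPE = [9, 5, 3, 6, 7, 8, 4, 2, 0, 3, 9]

def neighbors(x):
    return [n for n in (x - 1, x + 1) if 0 <= n < len(LANDSCAPE)]

def greedy_walker(start):
    """Hardwired policy: always step to the lowest neighbor; halts when stuck."""
    x = start
    while True:
        step = min(neighbors(x), key=lambda n: LANDSCAPE[n])
        if LANDSCAPE[step] >= LANDSCAPE[x]:
            return x  # trapped in whatever minimum happened to be nearest
        x = step

def patient_walker(start, steps=2000, temp=2.0):
    """More competent policy: occasionally accepts temporarily worse states
    (simulated-annealing style), so it can back out of local minima."""
    x = best = start
    for _ in range(steps):
        n = random.choice(neighbors(x))
        worse_by = LANDSCAPE[n] - LANDSCAPE[x]
        if worse_by <= 0 or random.random() < math.exp(-worse_by / temp):
            x = n
        temp = max(temp * 0.995, 0.05)  # gradually lose "patience"
        if LANDSCAPE[x] < LANDSCAPE[best]:
            best = x
    return best

random.seed(1)
print("greedy walker ends at:", greedy_walker(0))    # stuck at index 2
print("patient walker ends at:", patient_walker(0))  # typically reaches index 8
```

The greedy walker is the "hardwired automaton": perturb it and it fails the same way every time. The patient walker trades short-term progress for long-term success, which is exactly the competency being described.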

As a biologist you face quite a bit of resistance in talking about goals. Where do you think this resistance comes from?

I call it teleophobia. There is this incredible resistance to any kind of model or framework that has to do with goals, and I think the reason is this. In the olden days, there were only two kinds of objects in the world, as far as we knew. There was dumb matter, which just does what it does, and then there were humans, who had goals. That was the dichotomy, and that's how we thought about it.

Now if you live in that world, absolutely, if those are your only two choices, when you do science you want to err on the side of no goals, because otherwise you end up with completely scientifically intractable pseudo-problems in which you are attributing human-level goals to most of what we see. That's useless, so people err on the side of no goals at all.

Now the nice thing, which a lot of people seemingly haven't caught up to, is that ever since the 1940s, and probably well before that, we've had a perfectly good science of goals that is not magical. The age of having either no goals or human-level cognition as your only options is gone. Since the '40s, we've had cybernetics, we've had control theory, we've had computer science. We now know perfectly well that you can have mechanical, non-magical, naturalistic systems that have goals. And it's not scary.
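The textbook object from that cybernetic tradition is the feedback controller. The following minimal sketch (a generic illustration, not code from the interview or Levin's lab) shows a thermostat-style loop whose "goal" is nothing but a stored setpoint, and whose pursuit of that goal is a fully mechanical sense-compare-act cycle:

```python
# A thermostat-style feedback loop: the classic cybernetic "system with a goal."
# The goal is just an encoded setpoint; pursuing it is purely mechanical.

def thermostat(setpoint, temp, steps=15, gain=0.3, drift=-0.5):
    """Each step: sense the error, act proportionally to reduce it,
    while the environment keeps pushing the temperature away (drift)."""
    for _ in range(steps):
        error = setpoint - temp        # sense: distance from the goal state
        temp += gain * error + drift   # act, then absorb the external disturbance
        print(f"temp = {temp:5.2f}  (error was {error:+.2f})")
    return temp

# Perturb it however you like; it keeps climbing back toward the setpoint.
# (A pure proportional controller settles slightly off-target under constant
# drift, the classic steady-state error, but the goal-directedness is plain.)
thermostat(setpoint=21.0, temp=15.0)
```

Nothing magical is happening here, yet the system has a preferred state and some competency to return to it when perturbed, which is all the goal-talk is meant to license.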

How would you characterize the current climate regarding teleology in biology?

I want to be clear that I'm not the first person to try to rehabilitate teleology in biology. There have been many very deep thinkers who have made this effort from time to time. However, I think the field as a whole overwhelmingly falls into two camps. There are people looking downwards toward molecular biology, assuming that the micro-details of chemistry are going to do all the work we need and that there should be no goal-talk whatsoever. That's a massive slice of the pie. Then there are the people who look at it in reverse. They tend to be more interested in psychology, behavioral science, and maybe social issues, and they worry that that kind of approach is going to drain humans of moral status, and that basically everything will then go out the window. They're worried about the coming AIs and about extending personhood to things they don't think are persons.

And then there’s a there’s a kind of a razor thin community in between which are people who work on basal cognition. And, again, I’m not the only one there. There are a number of really excellent folks working in this area.

The biggest problem with all of this is that most of these discussions have been taking place in a philosophical vacuum. A lot of the people who critique this topic have never read a single thing on teleology; they just feel that goals are bad. And often the people who are into this topic debate endlessly in philosophy, and it never impacts the other side of the community, because what they want to see is: all right, what does this do for me? Why do I need to pay attention to this? What's the payoff going to be? So what we've done is work very, very hard at being extremely clear that these concepts are not philosophy. They cannot be decided from an armchair. They literally make a difference in how you do experiments and what experiments you do. So, to me, the final arbiter of all of this is simply experiment.


Is the so-called principle of least action in physics relevant for your work?

Yes, absolutely, least-action principles are a massively important inspiration for me. I'm no physicist, and for a much better story about the physics you can talk to Karl Friston and Christopher Fields, people I work with who are much, much sharper on this…


But, for me, it bears on the following question. As I have rolled out this framework of a continuum of cognition, the question arises: is there a zero on it? Are there things that have exactly zero cognition? Because of these least-action principles, I think not. I'm not 100% convinced, but if I had to put money down right now, I would say that in our universe there is no zero cognition. Because if we ask ourselves: what are the absolutely minimal requirements for something you would call an agent?

Well, the minimal requirements, to me, are, first, that you are able to do some sort of goal-directed activity, as simple as it may be; and second, that there is some degree of indeterminacy in your actions, something not precisely explained by all the local forces immediately acting on you right now. And if you think about it that way, single particles already have both: we already have the least-action behavior, and we have quantum indeterminacy.
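For reference, since the formula itself does not appear in the interview: the least-action principle says that among all conceivable paths $q(t)$ between fixed endpoints, a classical system follows the one that makes the action stationary,

$$
S[q] = \int_{t_0}^{t_1} L\big(q(t), \dot{q}(t), t\big)\, dt, \qquad \delta S = 0,
$$

where $L$ is the system's Lagrangian. Even a single particle's trajectory thus satisfies a path-level criterion rather than a purely step-by-step local rule, which is the intuition Levin is drawing on here.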