Description: In this lecture, Prof. Jeff Gore introduces oscillatory genetic networks. He asks why oscillations are useful and why we might want to design an oscillator. Central to the lecture is a Nature article: A Synthetic Oscillatory Network of Transcriptional Regulators.
Instructor: Prof. Jeff Gore
Oscillatory Genetic Networks
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Today, what we're going to do is, first, introduce this idea of oscillations and why they might be useful. A fair amount of the day will be spent discussing this paper by Michael Elowitz and Stan Leibler that you read over the last few days, which was the first, kind of, experimental demonstration that you could take these random components, put them together, and generate oscillatory gene networks.
And finally, it's likely we're going to run out of time around here. But if we have time, we'll talk about other oscillator designs. In particular, these relaxation oscillators that are both robust and tunable. It's likely we're going to discuss this on Tuesday. All right, so I want to start by just thinking about oscillator designs.
But before we get into that, it's worth just asking a question. Why is it that we might want to design an oscillator? What do we like about oscillations? Does anybody like oscillations? And if so, why? Yes.
AUDIENCE: You can make clocks. And clocks are really--
PROFESSOR: Perfect. Yes, all right. So two part answer. You can make clocks. And clocks are useful. All right. OK, so this is a fine statement. So oscillators are, kind of, the basis for time keeping.
And indeed, classic ideas of clocks, like a pendulum clock. The idea is that you have this thing, it's going back and forth, and each time that it goes, it allows some winding mechanism to move. And that's what the clock is based on.
And even modern clocks are based on some sort of oscillatory dynamic. It might be a very high frequency. But in any case, the basic idea of oscillations as a mechanism for time keeping is why we really care about it.
Of course, just from a dynamical systems perspective, we also like oscillations because they're interesting from a dynamical standpoint. And therefore, we'd like to know how we might be able to make them. Can anybody offer an example of an oscillator in a gene network in real life? Yes.
AUDIENCE: Circadian.
PROFESSOR: The circadian oscillator. That's right. So the idea there is that there's a gene network within many organisms that actually keeps track of the daily cycle and, indeed, is entrained by the daily cycle. So of course, the day-night cycle, that's an oscillator on its own. And it goes on without us, as well.
But it's often useful for organisms to be able to keep track of where in the course of the day it might be. And the amount of light that the organism is getting at this particular moment might not be a faithful indicator of how much light there will be available in an hour because it could just be that there's a cloud crossing in front of the sun.
And you don't want-- as an organism-- to think that it's night. And then, you shut down all that machinery because, after that cloud passes, you want to be able to get going again. So it's often useful for an organism to know where in the morning, night, evening cycle one is.
And we will not be talking too much about the circadian oscillators in this class. Although, I would say, to the degree of your interest in oscillations, I strongly encourage you to look up that literature because it's really beautiful. In particular, in some of these oscillators, it's been demonstrated that you can get the oscillations in vitro, i.e., outside of the cell.
Even in the absence of any gene expression, in some cases, you can still get oscillations of just those protein components in a test tube. This was quite a shocking discovery when it was first published. But we want to start out with some simpler ones.
In particular, I want to start by thinking about auto-repression. So if you have an auto-regulatory loop where some gene is repressing itself, the question is, does this thing oscillate? And indeed, it's reasonable that it might because we can construct a verbal argument.
Say x starts out high. Then, it should repress itself, so you get less new x being made. So the concentration falls. So maybe I'll give you a plot to go with it: the concentration of x as a function of time.
You can imagine just starting somewhere high. That means it's repressing its own expression. So it's going to fall. But then, once it falls too much, then all of a sudden, OK, well, we're not repressing ourselves anymore.
So maybe then we get more expression. More of this x is being made. So it should come back up. And then, now we're back where we started. So this is a totally reasonable statement. Yes?
AUDIENCE: [INAUDIBLE]?
PROFESSOR: Well I don't know. I mean, I didn't introduce any damping in here. The amplitude is the same everywhere.
AUDIENCE: So you're saying that you could actually have something--
PROFESSOR: Well I guess what I'm really trying to say is that just because you can construct a verbal argument that something happens does not mean that a particular equation is going to do that. Part of the value of equations is that they force you to be explicit about all the assumptions that you're making. A given equation is a mathematical manifestation of the assumptions you're making.
And then, you're going to ask does that oscillate. Yes/no? And then you're going to say, OK, well what would we need to change in order to introduce oscillations? And I'll just-- OK. So this is definitely an oscillation. The question is, should you find this argument I just gave you convincing? And what I'm, I guess, about to say is that you shouldn't.
But then, we need to be clear about what's going on and why. And just because you can make a verbal argument for something doesn't mean that it actually exists. I mean, it's a guide to how you might want to formalize your thinking.
And in particular, the simplest way to think about oscillations that might be induced in this situation would be to just say, all right, well, the simplest model we have for an auto-regulatory loop that's negative is dp/dt = alpha / (1 + p^n) - p. So this is, kind of, the simplest equation you can write that captures this idea that this protein p is negatively regulating itself, in a cooperative fashion maybe.
Now it's already in a non-dimensionalized version. Right? And what you can see is that, within this realm, there are only two things that can possibly be changing: how cooperative that repression is-- n-- and the strength of the expression in the absence of repression-- alpha.
And as we discussed on Tuesday, alpha is capturing all these dynamics of the actual strength of expression, together with the lifetime of the protein, together with the binding affinity K. So all those things get wrapped up in this alpha.
All right, so this is, indeed, the simplest model you can write down to describe such a negative auto-regulatory loop. Now that we've done this, we want to know, does this thing oscillate? And even without analyzing this equation, there's something very strong that you can say.
So the question we're going to ask is, is it possible for this thing to oscillate? We'll say, oscillations possible? And this time, I'm referring to mathematically possible. So maybe this thing does oscillate. Maybe it doesn't. But in particular, is there anything that you can say without analyzing it?
We're just going to say, is it possible? Yes or no? If you say no, you have to be prepared to give an argument for why this thing is not allowed to oscillate. I'm talking about this equation. Do you or don't you understand the question that I'm trying to ask?
And we haven't analyzed this thing yet. But the question is, even before analyzing it, can we say anything about whether it's mathematically allowed to oscillate? I'll give you 10 seconds to think about it. And if you say no, you get to tell me why.
All right, ready? Three, two, one. All right, so we got a smattering of things. So I think this is not obvious a priori. But it turns out that it's actually just mathematically impossible for this thing to oscillate. And can somebody say why that might be?
AUDIENCE: Because you can only have one value of p dot?
PROFESSOR: Perfect. OK. So for a given value of p, there's only one value of p dot that you can have. And in particular-- so p here is like the concentration of x. So I'm going to pick some value, randomly, here of p.
And what you're pointing out is this is a differential equation in which if you give me or I give you the p, you can give me p dot. And there's a single value p dot for each p. And in this oscillatory scheme, is that statement true?
No. What you can see is that, over here, this is x-- or p-- the concentration of x. We're using p here because we're about to start talking about mRNA, so I want to keep the notation consistent.
What you see is that the derivative here is negative. The derivative here is positive. Negative, positive. So any oscillation that you're going to be able to imagine is going to have multiple values for the derivative as a function of that value just because you have to come back and forth. You have to cross that point multiple times.
So what this is saying is that since this is a differential equation-- and it's actually important that it's a differential equation rather than a difference equation where you have discrete values. But given that this is a differential equation where time is taking little, little, little steps and you have a single variable, it just can't oscillate.
So for example, if you're talking about the oscillations of the harmonic oscillator, the important thing there is that you have both the position and the velocity. You see, these are two dynamical variables that are interacting in some way because you have momentum, and in that case, that allows for the oscillations in the case of a mass on a spring, for example. Question.
AUDIENCE: I'm still not understanding. So the value of p cannot oscillate?
PROFESSOR: Right. So what we're saying is that p simply cannot oscillate in this situation where we have a differential equation describing p-- if we just have p dot as a function of p and we don't have a second-order term, a p double dot, for example. So if we just have a single derivative with respect to time and some function of p over here, what that means is that, if p is specified, then p dot is specified.
And that's inconsistent with any sort of oscillation because any oscillation is going to require that, at this given value of p-- this concentration of p-- in this case, the concentration's going down; here, it's going up. So this is-- from that standpoint-- a multi-valued function. OK? Any other questions about this statement?
Even if I had just written down some other function of p over here, this statement would still be true. And it's valuable to have some intuition about what are the essential ingredients to get this sort of oscillation. And for simple harmonic motion, there we have a second derivative, not just a first derivative, and that's what allows oscillations there.
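A minimal numerical sketch of this point (the parameter values alpha = 10, n = 2, and the starting point are illustrative assumptions, not numbers from the lecture): because p dot is a single-valued function of p, the trajectory can only relax monotonically toward the fixed point.

```python
# Forward-Euler integration of dp/dt = alpha/(1 + p^n) - p.
# alpha, n, dt, and the starting point are assumed for illustration.
import numpy as np

alpha, n = 10.0, 2.0
dt, steps = 0.01, 2000

p = 5.0                  # start above the fixed point (p0 = 2 for these values)
trace = []
for _ in range(steps):
    trace.append(p)
    p += dt * (alpha / (1.0 + p**n) - p)

# Since p dot is single-valued in p, the trajectory cannot pass through the
# same p with a different slope: it decays monotonically, with no oscillation.
print("monotone decay:", bool(np.all(np.diff(trace) <= 0)))
```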
OK, so we can, maybe, write down a more complicated model of a negative auto regulation. And then, try to ask the same thing. Might this new model oscillate? And this looks a little bit more complicated. But we just have to be a little bit careful.
All right, so this is, again, negative auto-regulation. What we're going to do is we're going to explicitly think about the concentration of the mRNA. OK. And that's just because when a gene is initially transcribed, it first makes mRNA. And then, the mRNA is translated into protein. Right?
So what we can do is we can write down something that looks like this. m dot, the derivative of m with respect to time: dm/dt = alpha / (1 + p^n) - m. And then dp/dt = -beta (p - m). So m is the concentration of mRNA. And p is the concentration of protein. OK?
All right, and what you can see is that the protein is now repressing expression of the mRNA. mRNA is being degraded. But then, down here, this is a little bit funny. But what you can see is that, if you have more mRNA, then that's going to lead to the production of protein. Yet, we also have a degradation term for the protein. Yes?
AUDIENCE: Why are we multiplying the degradation rate of the protein times some beta, as well?
PROFESSOR: That's a good question. OK, you're wondering why we've pulled out this beta. In particular-- right. OK, perfect. OK, yeah. This is very important. And actually, this gets in-- once again-- to this question of these non-dimensional versions of equations. Mathematically, simple. Biologically, very complicated. Well, first of all, what is it that we've used as our unit of time in these equations?
AUDIENCE: The life of mRNA.
PROFESSOR: Right. So it's based on the lifetime of the mRNA because we can see that there's nothing sitting in front of this m. And if we want to, then, allow for a difference in the lifetimes of the mRNA and the protein, then we have to introduce some other thing, which we're calling beta.
So beta is the ratio of-- well which one's more stable? mRNA or protein, often, typically?
AUDIENCE: Protein.
PROFESSOR: Proteins are, typically, more stable. So does that mean that beta should be larger or smaller than 1? OK, I'm going to let you guys think about this just to make sure we're all together-- OK, so the question is, is beta, A, greater than 1, typically much greater? Or is it, B, much less than 1, given what we just said?
All right, you think about it for 10 seconds. All right. Are you ready? Three, two, one. All right, so most people are saying B. So indeed, beta should be much less than 1. And that's because beta is the ratio of the lifetimes.
So you can see, if beta gets larger, that increases the degradation rate of the protein. What do I want to say? So beta is the ratio of the lifetime of the mRNA to the lifetime of the protein. Yes?
AUDIENCE: So I get why we have to--
PROFESSOR: Yeah. No, I understand. No, I understand you. I'm getting to your question. First, we have to make sense of this because the next thing is actually even weirder. But I just want to be clear that beta is defined as the lifetime of the mRNA over the lifetime of the protein.
What's interesting is, actually, there's a typo or mistake in the Elowitz paper. So if you look at figure 1B or so-- yeah, figure 1B, actually. It says that beta is the protein lifetime divided by the mRNA lifetime. So you can correct that, if you like.
So beta is the lifetime of the mRNA divided by the lifetime of the protein. OK, so I think that we understand why that term is there. But the weird thing is that we're doing p minus m over here. Right? And it feels, somehow, that that can't be possible. You know, that it shouldn't just be beta times m over here because it feels like it's under-determined. Right?
OK. So it's possible I just screwed up. But does anybody want to defend my equation here? How might it be possible that this makes any sense that you can just have the one beta here that you pull out, and it's just p minus m over here?
AUDIENCE: I think it's an assumption of the model where they choose the lifetime of the protein and the mRNA to be similar.
PROFESSOR: Well, no, because, actually, we have this term beta, which is the lifetime of the mRNA divided by the lifetime of the protein. So we haven't assumed anything about this beta. It could be, in principle, larger than 1 or smaller. So it's true that, given typical facts about life in the cell, you expect beta to be much less than 1. But we haven't made any assumption.
Beta is just there. It could be anything. Right? So yeah, it's possible we've made some other assumption. But what is going on. Yes?
AUDIENCE: Is it the concentration is scaled by the amount of necessary--
PROFESSOR: Yes, that's right because, remember, you can only choose one unit for time. And we've already chosen that to get this to be just minus m here. But you get to choose what's the unit of concentration for both mRNA and protein. Can somebody remind us what the unit of concentration is for protein?
AUDIENCE: The dissociation constant of the protein to the--
PROFESSOR: That's right. So it's this dissociation constant. And more generally, it's the protein concentration at which you get half-maximal repression. And depending on the detailed models, it could be more complicated. But in this phenomenological realm, if p is equal to 1, you get half repression. And that's our definition for what p equal to 1 means. So we've rescaled out that K.
So what we've really done is that there's some unit for the concentration of mRNA that we were free to choose. And it was chosen so that you could just write p minus m. But what that means is that it requires a genius to figure out what m equal to 1 means, right? It doesn't quite require a genius. But what do you guys think it's going to depend on? Yes?
AUDIENCE: It's going to depend on this ratio of lifetimes, as well.
PROFESSOR: Yes, right. So beta is going to appear in there. So I'll give you a hint, there are three things that determine it.
AUDIENCE: Transcription, or the speed of transcription.
PROFESSOR: Translation, yes. So the translation efficiency. So each mRNA is going to lead to some rate of protein synthesis. So yeah, the translation rate or efficiency is going to enter.
There aren't that many other things it could be. But yeah, I mean, this is tricky. And it's OK if you can't just figure it out here because this, I think, is pretty subtle. It turns out it also depends on that K parameter because there's some sense in which-- m equal to 1-- what it's saying is that that's the amount of mRNA that you need so that, if the protein concentration were 1, you would not get any change in the protein concentration.
And given that now I had to invoke p in there, and p is scaled by K, K also ends up being relevant for this mRNA unit. So you can, if you'd like, go ahead and start with an original, reasonable set of equations and then get back to this.
But I think, once again, this just highlights that these non-dimensional versions of the equations are great, but you have to be careful. You don't know what means what. All right? Are there any questions about what we've said so far?
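Since we're about to analyze this two-variable model, here is a minimal simulation sketch (the parameter values are assumptions for illustration, with beta much less than 1 as discussed):

```python
# Forward-Euler integration of the non-dimensionalized mRNA-protein model:
# dm/dt = alpha/(1 + p^n) - m,  dp/dt = -beta*(p - m).
# alpha, n, beta, dt, and the initial condition are assumed values.
import numpy as np

alpha, n, beta = 10.0, 2.0, 0.2
dt, steps = 0.01, 20000

m, p = 0.0, 0.0
for _ in range(steps):
    dm = alpha / (1.0 + p**n) - m
    dp = -beta * (p - m)
    m, p = m + dt * dm, p + dt * dp

# The trajectory relaxes (possibly with a damped spiral) into the fixed point
# m = p = p0 rather than settling onto a closed orbit.
print("final (m, p):", round(m, 3), round(p, 3))   # both approach p0 = 2 here
```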
OK. Now what we have is a protein concentration and an mRNA concentration. And what I'm going to ask now is, for this set of equations, is it mathematically possible that they could, maybe, oscillate? Yes. I mean, we're going to find that the answer is that these actually don't oscillate. But you have to actually do the calculation if you want to determine that.
You can't just say that it's impossible based on the same argument here. And that's because, if you think about this, now there's some mRNA concentration and some protein concentration. What we want to know is, do things oscillate in this space? And they could.
I mean, I could certainly draw a closed curve. It ends up not being true for this particular set of equations. But you can't a priori, kind of, dismiss the possibility. Yes?
AUDIENCE: That's like a differential equation. But if you write down the stochastic model of that, would that--
PROFESSOR: OK, this is a very good question. So this is the differential equation format of this and that we're assuming that there are no stochastic fluctuations. And indeed, there is a large area of excitement, recently, that is trying to understand cases in which you can have, so-called, noise induced oscillations.
So you can have cases where the deterministic equations do not oscillate. But if you do the full stochastic treatment, then that could oscillate. In particular, if you do a master equation type formalism. And actually, I don't know for these particular equations. Yeah, I don't know for this one.
But towards the end of the semester, we will be talking about explicit models-- predator-prey systems-- in which the differential equation format doesn't oscillate. But then, if you do the master equation stochastic treatment, then it does oscillate. Yeah, so we will be talking about this in other contexts. But I don't know the answer for this model.
All right, so let's go and, maybe, try to analyze this a little bit. And this is useful to do, partly because some of the calculations are going to be very similar to what we're about to do next, which is look at stability analysis of a repressilator kind of system. All right.
So this thing here is some function f of m and p. And this guy here is indeed, again, some other function g of m and p. And we're going to be taking derivatives of these functions around the fixed point. And maybe I will also say there's going to be some stable point. We should just calculate what it is.
I'm sorry I'm making this go up and down. Don't get dizzy. So first of all, it's always good to know whether there are fixed points in any sort of equations that you ever look at. So let's go ahead and see that.
First of all, is m equal to 0, p equal to 0-- is that a fixed point in the system? No. Right? If m and p are 0, then this second equation is 0. But the first one's not, because we get expression of the mRNA in the absence of the protein. So the origin is not a fixed point.
Now to figure out the fixed points, we just set these things equal to 0. So if m dot is equal to 0, we have 0 = alpha / (1 + p^n) - m. And setting p dot equal to 0 gives 0 = -beta (p - m).
So what you can see is that, at equilibrium, we have a condition here where m is equal to p. So from this, we get m equilibrium is equal to p equilibrium. So m equilibrium over here has to be equal to p equilibrium, we just said. And that's equal to this guy here: alpha / (1 + p_equilibrium^n).
All right. And the condition for this equilibrium is then something that looks like this: p_0 (1 + p_0^n) = alpha. Now this is maybe not so intuitive. But alpha is this non-dimensional version of the strength of expression. And, broadly, it's not obvious how to solve this explicitly.
But as the strength of expression goes up, the equilibrium here goes up. And I'm saying equilibrium, and that's, maybe, a little bit dangerous. We might even want to just call it a fixed point in concentration, since it doesn't have to be stable. If we don't want to bias our thinking-- different people argue about whether an equilibrium should be required to be stable.
We could just call it some p 0, if that makes us less likely to bias our thinking in terms of whether this concentration should be a stable or unstable fixed point. But for example, in these units, if alpha is around 10, n might be 2. Then, this thing gives us something in the range of a couple.
I mean, you can calculate what it should be-- it's exactly 2, since 2 times (1 + 2^2) is 10. All right, so yes, I'm just giving an example. If alpha were 10, then this equilibrium concentration, or this fixed point concentration, would be 2 if n were equal to 2, to give you, kind of, some sense of the numbers. And this is 2 in units of that binding affinity K, right.
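Written out, the fixed-point condition and the worked example are (a reconstruction of the board work):

```latex
% Setting both time derivatives to zero in the mRNA-protein model:
\begin{align*}
\dot m = 0,\ \dot p = 0
\;\Rightarrow\; m_0 = p_0, \qquad
p_0\left(1 + p_0^{\,n}\right) = \alpha .
\end{align*}
% For the numbers quoted in the lecture, alpha = 10 and n = 2:
% p_0 (1 + p_0^2) = 10 gives p_0 = 2 exactly, since 2(1 + 4) = 10.
```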
Now the question is, well, what does this mean? Why did we do this? Why do we care at all about the properties of that fixed point? OK, so this might be some p 0. And, again, m 0 is equal to p 0 in these units. So there's some fixed point somewhere in the middle there.
Now it turns out that the stability of that fixed point is very important in determining whether there are oscillations or not. Now the question of the generality or what can you say that's universally true about when you get oscillations and when you don't, this is, in general, a very hard mathematical problem, particularly in higher numbers of dimensions.
But for two dimensions, there's a very nice statement that you can make based on the Poincare-Bendixson criterion. I cannot remember how to spell that. I'm probably mispronouncing it, as well. So Poincare-Bendixson, what they showed is that if, in two dimensions, you can draw some box here such that all of the trajectories are, kind of, coming in.
And indeed, in this case, they do come in because the trajectories aren't going to cross 0. If you have some mRNA, then you're going to start making protein. If you have just protein, no mRNA, you're going to start making some mRNA. And we know that trajectories have to come in from out here because if the concentration of mRNA and the concentration of protein are very large then, eventually, the degradation is going to start pulling things in.
So if you come out far enough, eventually, you're going to get trajectories coming in. So now we have some domain where all the trajectories are going to come in. Now you can imagine that, somehow, the stability of this thing is very important because in two dimensions, when you have a differential equation, trajectories cannot cross each other.
So I'm not allowed in any sort of space like this to do something that looks like this because this would require that, at some concentration of m and p, I have different values for m dot and p dot. So it's similar to this argument we made for one dimension. But it's just generalized to two dimensions.
So we're not allowed to cross trajectories. Well, if you have a differential equation in any number of dimensions, that's true. But the thing is that this constraint is a very strong constraint in two dimensions. Whereas, in three dimensions, everything kind of goes out the window because in three dimensions, you have another axis here.
And then, these lines can do all sorts of crazy things. And that's actually, basically, why you need three dimensions in order to get chaos in differential equations because this thing about the absence of crossing is just such a strong constraint in two dimensions. Other questions about what I'm saying right now? I'm a little bit worried that I'm--
All right, so the trajectories are not allowed to cross. And that's really saying something very strong because we know that, here, trajectories are going to come out of the axes. For the mRNA, we don't know which direction they're going to come. But let's figure out: if it were to oscillate, would the trajectories be going clockwise or counterclockwise?
And actually, there's going to be some sense of the trajectories even in the absence of oscillations. But broadly, is there kind of a counterclockwise or clockwise kind of motion to the trajectories? Counterclockwise, right? And that's because mRNA leads to protein.
So things are going to go like this. And the question is, is it going to oscillate? And in two dimensions, actually-- Poincare-Bendixson-- what they say is that, if there's just one fixed point here, then the question of whether it oscillates is the same as the question of whether this fixed point is stable. So if it's stable, then there are no oscillations.
If it's unstable, then there are. We'll just say no oscillations and oscillations. And that's because, if it's a stable point and all the trajectories are coming in, then it just looks like this. So it spirals, maybe, into this point of m and p.
Whereas, if it's unstable, then those trajectories are, somehow, being pushed out. If it's unstable, then the trajectories are coming out of that fixed point. In which case, that's actually precisely the situation in which you get limit cycle oscillations.
So if the fixed point were unstable, it looks like this because we have some box. The trajectories are all coming in, somehow, in here. But if we have one fixed point here and the trajectories are coming out, that means we have something that looks like this. It kind of comes out.
And given that these trajectories can't cross, the question is, well, what can happen in between? And the answer is, basically, you have to get a limit cycle oscillation. There are these strange situations where you can get a path that is an oscillation that's, kind of, stable from one direction and unstable from another.
We're not going to worry about that here. But broadly, if this thing is coming out, then you end up, in both directions, converging to a stable limit cycle oscillation. So if it's an unstable fixed point, then this is exactly the situation in which you get a limit cycle oscillation.
OK. So that means that, what we really want to do if we want to ask-- let's try to back up again. We have this pair of differential equations. We want to know will this negative auto regulatory loop oscillate. Now what I'm telling you is that that question for two dimensions is analogous to the question of figuring out whether this fixed point is stable or not. If it's stable, then we don't get oscillations. If it's unstable, then we do. Any questions about this?
So let's see what it is. On Tuesday, we talked about stability analysis for linear systems. We got what I hope is some intuition about that. And of course, what we need to do here is try to understand how to apply linear stability analysis to this non-linear pair of differential equations.
And to do that, what we need to do is we need to linearize around that fixed point. So what we have is we have these two functions, f and g. And what we want to know is around that fixed point-- so we can define some m tilde, which is m minus this m 0. And some p tilde, which is p minus p 0.
So when m tilde and p tilde are around 0, that's telling us that we're close to that fixed point. And we want to know, if we just go a little away from the fixed point, do we get pushed away or do we come back to where we started? Well, we know that m tilde dot is actually equal to m dot, because m 0 is just a constant. And similarly for p tilde dot.
We can linearize by taking derivatives around the fixed point. And in particular, what we want to do is we want to take the derivative of f with respect to m, evaluated at the fixed point. That derivative is, indeed, just minus 1.
So in general, in these situations, what we have for m dot and p dot is the matrix of partial derivatives: the partial of this first function f with respect to m; the partial of f with respect to p; and down here, the derivative of g with respect to m and the derivative of g with respect to p.
And this is all evaluated at the fixed point (m 0, p 0). So we want to take these derivatives and evaluate them at the fixed point. And if we do that, we get minus 1 here, the derivative of f with respect to m, times m tilde.
This other guy, when you take the derivative with respect to p, you get a minus sign because this is in the denominator. And then, we have to take the derivative inside. So up top we get n alpha p 0 to the n minus 1. And down below, we get (1 plus p 0 to the n) squared. So we took the derivative of this term with respect to p, and we evaluated it at the fixed point p 0. Did I do that right?
But we still have to add a p tilde because this is saying how sensitive is the function to changes in where you are times how far you've gone away from the fixed point. And then, again, over here, we take the derivatives down below. So derivative g with respect to m. That gives us a beta m tilde. And then, we have a minus beta p tilde.
All right, so this is just an example of linearizing those equations around that fixed point. So ultimately, what we care about is really this matrix that's specifying deviations around the equilibrium. Right? So it's useful to just write it in matrix format because we get rid of some of the m's and p's.
Indeed, so this matrix that we either call A or the Jacobian, depending. So what we have is a minus 1. And we're going to call this thing x, because it's going to pop up a lot: x = -n alpha p_0^(n-1) / (1 + p_0^n)^2. So the top row is minus 1 and x, and the bottom row is beta and minus beta.
And then, we have our simple rules for determining whether this thing is going to be stable or not. It depends on the trace. And it depends on the determinant. So the trace should be negative. And is this trace negative? Yes, because beta-- does anybody remember what beta was again?
AUDIENCE: Ratio of lifetimes.
PROFESSOR: Ratio of lifetimes. Lifetimes are positive. So beta is positive. All right, so the trace is equal to minus 1 minus beta. This is, indeed, less than 0. So this is consistent with stability. Does that prove that it's stable? No. But we also need to know about the determinant of A, which is going to be beta-- this times this-- minus this times this.
So that's beta, minus beta times x. And since x is negative, that second piece is positive. We can write this all out-- det A = beta (1 + n alpha p_0^(n-1) / (1 + p_0^n)^2)-- just so that it's clear that it has to be positive. So beta is positive. Positive, positive, positive, positive. Everything's positive. So this thing has to be greater than 0.
So what does this mean about the stability of this fixed point? Stable. Fixed point stable. And what does that mean about oscillations? It means there are no oscillations. Fixed point stable; therefore, no oscillations.
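A quick numerical cross-check of this conclusion (an editor's sketch; the (alpha, n, beta) triples are arbitrary test values, not from the lecture):

```python
# For the mRNA-protein model, solve p0*(1 + p0^n) = alpha by bisection, build
# the 2x2 Jacobian, and confirm trace < 0, det > 0, and Re(lambda) < 0.
import numpy as np

def p_fixed(alpha, n):
    lo, hi = 0.0, alpha            # p0*(1 + p0^n) is increasing in p0
    for _ in range(100):           # bisection on p0*(1 + p0^n) - alpha = 0
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mid * (1 + mid**n) < alpha else (lo, mid)
    return 0.5 * (lo + hi)

for alpha, n, beta in [(10, 2, 0.2), (100, 4, 0.05), (2, 6, 1.0)]:
    p0 = p_fixed(alpha, n)
    x = -n * alpha * p0**(n - 1) / (1 + p0**n) ** 2   # the off-diagonal term
    A = np.array([[-1.0, x], [beta, -beta]])
    eig = np.linalg.eigvals(A)
    print(f"alpha={alpha}, n={n}, beta={beta}: trace={A.trace():.2f}, "
          f"det={np.linalg.det(A):.2f}, max Re(lambda)={eig.real.max():.3f}")
# Trace < 0 and det > 0 in every case, so all eigenvalues have negative real
# part: the fixed point is stable, and there are no oscillations.
```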
So what this is saying is that the original, kind of simple, equation we wrote down for negative auto-regulation was not allowed to oscillate mathematically. When you explicitly model the mRNA, it could have gone either way-- but still, that's insufficient to generate oscillations. However, maybe if you included more steps, maybe it would oscillate. Question?
AUDIENCE: So just to double check-- when you said, no oscillations, you mean stable oscillations?
PROFESSOR: That's right, sorry. When I say no oscillations, what I mean is, indeed, no limit cycle oscillations.
AUDIENCE: This is like a dampened--
PROFESSOR: Yeah. Yeah, so we, actually, have not solved exactly what it looks like. And I've drawn this as a pretty oscillatory thing. But it might just look like this, depending on the parameters and so forth. And indeed, we haven't even proven that this thing has complex eigenvalues.
But certainly, there are no limit cycle oscillations. And I'd say it's really limit cycle oscillations that people find most exciting, because limit cycle oscillations have a characteristic amplitude. So it doesn't matter where you start. The oscillations go to some amplitude.
And they have a characteristic period, again, independent of your starting condition. So a limit cycle oscillation has a feeling similar to a stable fixed point in the sense that it doesn't matter where you start, you always end up there. So they're the ones that are really what you would call mathematically nice oscillations.
And when I say this, I'm, in particular, comparing them to neutrally stable orbits. So there are cases in which, in two variables, you have a fixed point here. And at least in the case of linear stability, if you have purely imaginary eigenvalues, what that means is that you have orbits that go around your fixed point.
And we'll see some cases that look like this later on. And this is, indeed, the nature of the oscillations in the Lotka-Volterra model for predator prey oscillations. They're not actually limit cycle oscillations. They're of this kind that are considered less interesting because they're less robust.
Small changes in the model can cause these things to either go away, to turn into this kind of stable spiral, or to turn into limit cycle oscillations. So we'll talk about this more in a couple months. These are neutrally stable orbits.
OK, but what I wanted to highlight, though, is that just because the original, simple, protein only model didn't oscillate and this protein mRNA together doesn't oscillate does not mean that it's impossible to get oscillations using negative auto regulation, either experimentally or computationally. And the question is, what might you need to do to get oscillations?
AUDIENCE: So in the paper they talk about leakage in the negative--
PROFESSOR: OK, right. So in the paper, they talk about various things, including things such as leakage. It turns out that leakage in expression only inhibits oscillations, though. So in some sense, if you're trying to get oscillations, leakage is a problem, actually. And that's why they use this especially tight-- well, we're going to talk about that in a few minutes.
They use an especially tight version of these promoters to have low rates of leaky synthesis. But what might you need in order to get oscillations with negative autoregulation? Did you have-- delay? Yes, indeed. And that's something that they mention in the Elowitz paper: you could add explicit delay.
So for example, instead of having the repression depend on-- OK, I already erased everything. But instead of having the protein production, for example, be a function of the mRNA now, maybe you say, oh, it's a function of the mRNA five minutes ago. And that's just because maybe it takes time to make the protein, or it takes time for this or that. You could introduce an explicit delay like that.
Or you could even, instead, have a model where you just have more steps. So what you do is you say, oh, well, yeah, sure. What happens is that, first, the mRNA is made. But then, after the mRNA is made, you have to make the peptide chain. Then, that peptide chain has to fold. And then, maybe, those proteins have to multimerize.
Indeed, if you write down such a model, then, for some reasonable parameters, you can get oscillations just with negative auto-regulation. And indeed, I would say that over the last 10 years, probably, the reigning king of oscillations in the field of systems and synthetic biology is Jeff Hasty at San Diego. And he's written a whole string of beautiful papers exploring how you can make these oscillators in simple gene networks.
So he's been focusing on E. coli. There's also been great work in higher organisms in this regard. But let's say, Hasty's work stands out in terms of really being able to take these models and then implement them in cells and, kind of, go back and forth. And he's shown that you can generate oscillations just using negative auto-regulation if you have enough delays in that negative feedback loop.
Are there any questions about where we are right now? I know that we're supposed to be talking about the repressilator. But we first have to make sure we understand the negative auto regulation. So everything that we've said, so far, in terms of the models was all known.
But what Michael wanted to do is ask whether he could really construct an oscillator. And he did this using these three mutual repressors. We'll say x, y, and z just for now. x represses y, y represses z, and z represses x. And he had a nice model of this system that helped guide the design of his circuit.
So experiments-- as most of us who have done them know-- experiments are hard. So if you can do a week of thinking before you do a year of experimental biology, then you should do that. And what were the lessons that he learned from the modeling that guided his construction of this circuit? Yeah?
AUDIENCE: Lifetime of mRNA.
PROFESSOR: Right. So you want to have similar lifetimes of the mRNA and the protein. And this is, somehow, similar to this idea that you need more delay elements, because if you have very different lifetimes, then the more rapid process, somehow, doesn't count. It's very hard to increase the lifetime of the mRNA that much in bacteria. So instead, what he did is he decreased the lifetime of the proteins-- of the transcription factors, in this case, x, y, and z. And you mentioned the other thing that he maybe did.
AUDIENCE: He introduced the leakage, but he didn't mention that that was--
PROFESSOR: That's right. So I guess he knew that leakage was going to be a problem, i.e., that you want tight repression. So he used these synthetic promoters that had both a high level of expression when on and a very low level of expression when being repressed.
He made this thing. And in particular, he looked at it in a test tube. He was able to use, in this case, IPTG to synchronize the cells. And he looked at the fluorescence in the test tube. So the fluorescence is reporting on one of the proteins. We can call it x if we'd like. But the fluorescence is, kind of, telling you about the state.
And if it starts out, say, here, he saw a single cycle. Damped oscillations, maybe. So the question is, why did this happen? Why is it that, in the test tube, he didn't see something that looked like very nice oscillations?
Noise. And in particular, what kind of noise? Or what's going on? Desynchronization, exactly. So the idea is that, even if you start out with them all synchronized-- you give an IPTG pulse, and they're synchronized in some way-- it may be that, at the beginning, all of them are oscillating in phase with each other.
But over time, random noise-- phase drift in the different oscillators-- leads to some of them coming down and coming back up while others are slower. You start averaging all these things together. And it leads to damped oscillations at the test tube level, in the bulk. Yes?
AUDIENCE: So what do you mean in the test tube? Like, you just take all these components and put it--
PROFESSOR: Sorry, when I say test tube, what I mean is that you have all the cells. So they are still intact cells. But it's just many cells. So then, the signal that you get-- the fluorescence-- is some average over, or sum of, the fluorescence you get from all those cells.
So there's a sense that this is really what you expect given the fact that they're going to desynchronize. Of course, the better the oscillator in the sense that the lower the phase drift, then maybe you can see a slower rate of this kind of desynchronization. But this is really what you, kind of, expect.
All right. So that's what, maybe, led him to go and look at the single cell level where he put down single cells on this agar pad and just imaged as the cells oscillated and divided. Now there are a few features that are important to note from the data. The first is that they do oscillate.
That's a big deal because this was, indeed, the first demonstration of being able to put these random components together like that and generate oscillation. But they didn't oscillate very well. So they said, oh, maybe 40% of the cells oscillated. And I have no idea what the rest of the cells were doing.
But also, even in the cells that were oscillating, there was a fair amount of noise to the oscillation. And the latter half of this paper has a fair amount of discussion of why that might be. And they allude to the ideas that had been bouncing around on the theoretical, computational side, demonstrating that the low numbers of proteins and genes involved here could introduce stochastic noise into the system and, thus, lead to this kind of phase drift that was observed experimentally.
I think it was this basic observation-- that Michael got oscillations, but they were noisy-- that probably led him to start thinking more and more about the role of noise in gene networks and so forth, and led, later, to another hugely influential paper that is not going to be required reading in this class but is listed under the optional reading, if you're interested. But we'll really get into this question of noise a couple weeks from now.
Were there any other questions about the experimental side of this paper? I wanted to analyze maybe a little bit of a simple model of the repressilator. So the model that they used to help them design this experiment involved all three proteins and all three mRNAs.
And what that means is that, when you go and you do a model, you're going to end up with a six by six matrix. And I don't have boards that are big enough. So what I'm going to do instead is I'm going to analyze just the protein only version model of the repressilator. All right.
So what we have here is three proteins: p1, p2, p3. For the first one, p1 dot: dp1/dt = -p1 + alpha / (1 + p3^n). So we have degradation of this protein, minus p1, and protein 1 is repressed by protein 3. And we're going to analyze the symmetric version, just like what Michael did. So that means we're assuming that all the proteins are equivalent. I'm sure that's not true because these are different promoters and different everything.
But this gives us the intuition. Similarly, protein 2 is going to be repressed by protein 1: dp2/dt = -p2 + alpha / (1 + p1^n). And then protein 3 is going to be repressed by protein 2: dp3/dt = -p3 + alpha / (1 + p2^n).
So this is what you would call the protein-only model of the repressilator. Now, just as before, the fixed points are where the p-i dots are equal to 0. And we get the same equation that we, basically, had before, where the equilibrium or the fixed point, again, is given by p_0 (1 + p_0^n) = alpha. So it's the same requirement that we had before.
Now the question is, how can we get the stability of that internal fixed point? It's worth mentioning here that now we have three proteins. So the trajectories are in this three-dimensional space. So from a mathematical standpoint, determining the stability of that internal fixed point is actually not sufficient to tell you that there have to be oscillations or there cannot be oscillations, because these trajectories are, in principle, allowed to do all sorts of crazy things in three dimensions.
But it turns out that it still ends up being true here that when this internal fixed point is stable, you don't get oscillations. And when it's unstable, you do. But that, sort of, didn't have to be true from a mathematical standpoint.
All right. Now since this is going to be a 3 by 3 matrix, we're going to have to calculate those eigenvalues. Now how many eigenvalues are there going to be? Three. OK. So this thing I've written in the form of a matrix to help us out a little bit. But in particular, we're going to get the same thing that we had before for the p1 tilde.
So these are deviations, again, from the fixed point. And we get this matrix that looks like this: minus 1's on the diagonal, 0's, and the same x that we had before-- conveniently still on the board. So the rows are (-1, 0, x), (x, -1, 0), and (0, x, -1). So this is just after we take these derivatives.
And then, we have p1 tilde, p2 tilde, and p3 tilde. Now what we need to know is, for this Jacobian, what are the eigenvalues going to be? For this thing to be stable, what's the requirement? What's the requirement for stability of that fixed point, that p0?
AUDIENCE: [INAUDIBLE].
PROFESSOR: OK. Right. For two dimensions, this trace and determinant condition works. It's important to say that that only works for two dimensions, actually, the rule about traces and determinants. So be careful. So what's the more general statement? Yeah.
AUDIENCE: Negative eigenvalues.
PROFESSOR: Exactly. So in order for that fixed point to be stable, it requires that all the eigenvalues have real parts less than 0. So in order to determine the stability of the fixed point, we need to ask what are the eigenvalues of this matrix. And to get the eigenvalues, what we do is we calculate this characteristic equation, this thing that we learned about in linear algebra and so forth.
What we do is we take-- all right, this is the matrix A, we'll say. This is matrix A. And what we want to do is we want to ask whether the determinant of the matrix A minus some eigenvalue times the identity matrix. We want this thing to be equal to 0. So this is how we determine what the eigenvalues are.
And this is not as bad as it could be for a general 3 by 3 matrix because a lot of these entries are 0. So this is the determinant of the following matrix: the first row is (minus 1 minus lambda, 0, x); the second row is (x, minus 1 minus lambda, 0); and the third row is (0, x, minus 1 minus lambda).
Now to take the determinant of a 3 by 3 matrix, remember, you can expand along the first row. The first term is minus 1 minus lambda times the determinant of its 2 by 2 sub-matrix, which is just (minus 1 minus lambda) squared again. So this first piece is minus (1 plus lambda) cubed. The next term has a 0 in front. That's great. We don't need to worry about that.
The next one, we get plus x times the determinant of its sub-matrix, which is x squared. So that's plus x cubed. We want the sum equal to 0. So we actually get a very simple requirement for the eigenvalues, which is that (1 plus lambda) cubed is equal to this thing x cubed.
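Written out, the board calculation is (a reconstruction, using the x defined earlier):

```latex
% Characteristic equation of the symmetric protein-only repressilator, with
% x = -n alpha p_0^{n-1} / (1 + p_0^n)^2:
\begin{align*}
\det(A - \lambda I) =
\begin{vmatrix}
-1-\lambda & 0 & x \\
x & -1-\lambda & 0 \\
0 & x & -1-\lambda
\end{vmatrix}
= -(1+\lambda)^3 + x^3 = 0
\quad\Rightarrow\quad (1+\lambda)^3 = x^3 .
\end{align*}
```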
Now be careful because, remember, x is actually a negative number. So watch out. I think the best way to get a sense of what this thing is, is to plot it. Of course, it's a little bit tempting here to just say, all right, well, can we just say that 1 plus lambda is equal to x?
No. So what's the matter with that? I mean, it's, sort of, true, maybe, possibly. Right. So the problem here is that we're supposed to be getting three different eigenvalues. Or at least, it's possible to get three different eigenvalues. So this is really specifying the solutions for 1 plus lambda on the complex plane.
So the solutions for 1 plus lambda we can get by thinking about the complex plane: this is the real part of 1 plus lambda, and this is the imaginary part of 1 plus lambda. And we know that one solution is going to be out here at x. This distance here is the magnitude of x.
Now the others, however, are going to be around the complex plane at the same distance, so we get something that looks like this. So these make, like, a 30-60-90 triangle. So this is 30 degrees here. Because what you see is that, for each of these three solutions for 1 plus lambda, if you cube them, you end up with x cubed.
So this guy, you square it, you cube it, you end up back here. This one, if you cube it, you start out here, squared, and then cubed comes back out here. Same thing, and this one goes around somehow. All right, so there are three solutions for 1 plus lambda. And they are these points here.
Now, of course, it's not 1 plus lambda that we actually wanted to know about. It was lambda. But if we know what 1 plus lambda is, then we can get what lambda is. What do we have to do? Right, we have to slide it to the left.
So this is the real axis. This is the imaginary axis. 1. So we have to move everything over by 1. Now remember, the requirement for stability was that all of the eigenvalues have real parts that are negative. That means the requirement for stability of that fixed point is that all three of these eigenvalues end up in the left half of the plane.
So what you can see is that, in this problem, the whole question of stability and whether we get oscillations boils down to how big this thing is. What's this distance-- the real part of these complex solutions for 1 plus lambda? If this distance is more than 1, then when we subtract 1, we don't get it into the left half of the plane.
But yeah, we need to know whether this thing is larger or smaller than 1. And that has to do with the magnitude of x. So if the magnitude of x-- do you guys remember your geometry for a 30, 60, 90 triangle?
All right, so if the magnitude of x-- and this is indeed the magnitude of x-- the short edge on a 30-60-90 triangle is half the long edge, right? So the real part of these two complex solutions is the magnitude of x over 2. So what we can say is that this fixed point is stable if and only if the magnitude of x is what?
Less than 2. OK. That's nice. And if we want, we could plug in, just to write this out: the condition is n alpha p_0^(n-1) / (1 + p_0^n)^2 < 2.
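A small numerical sketch of this criterion (the sampled |x| values are arbitrary): scanning the 3 by 3 Jacobian confirms that stability flips at |x| = 2.

```python
# Largest eigenvalue real part of the symmetric repressilator Jacobian as a
# function of |x|; the criterion says stable iff |x| < 2.
import numpy as np

for mag in [1.0, 1.5, 1.9, 2.1, 3.0]:
    x = -mag                                   # remember x is negative
    A = np.array([[-1, 0, x],
                  [x, -1, 0],
                  [0, x, -1]], dtype=float)
    max_re = np.linalg.eigvals(A).real.max()
    print(f"|x|={mag}: max Re(lambda)={max_re:+.3f}  "
          f"{'stable' if max_re < 0 else 'unstable'}")
# The complex pair has Re(lambda) = |x|/2 - 1, so stability is lost at |x| = 2,
# as the 30-60-90 geometry predicts.
```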
So it's useful, once you get to something like this, to try to just ask, for various kind of values, how does this play out? What does the requirement end up being? And a useful limit is to think about what happens in the limit of very strong expression? So strong expression corresponds to what?
AUDIENCE: Big alpha?
PROFESSOR: Big alpha. Yes, perfect. And it turns out, big alpha is a little bit-- OK, and we have to remember what p0 was. p0 satisfied p_0 (1 + p_0^n) = alpha. All right, so this is the requirement.
And actually, if you play with these equations just a little bit, what you'll find is that, if alpha is much larger than 1, then this requirement is that n is less than 2, or less than around 2. This is saying the fixed point is stable if you don't have very strong cooperativity in repression. And the flip side is, if you have strong cooperativity of repression, then you can get oscillations because this interior fixed point becomes unstable.
So this is also saying that n greater than around 2 leads to oscillations. And this maybe makes sense because, when you have strong cooperativity in the repression there, what that's telling you is that it's a switch-like response. And in that regime, it maybe becomes more like a simple Boolean kind of network where, if you just write down the ones and zeroes, you can convince yourself that this thing, in principle, could oscillate.
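A minimal time-domain sketch of this conclusion (alpha = 10 and the two Hill coefficients are assumed test values, not from the lecture): integrating the symmetric protein-only repressilator with n below and above the boundary.

```python
# Forward-Euler integration of the symmetric protein-only repressilator:
# dp_i/dt = -p_i + alpha/(1 + p_{i-1}^n). Parameters are assumed.
import numpy as np

def simulate(alpha, n, t_end=200.0, dt=0.01):
    p = np.array([1.0, 1.2, 0.8])          # slightly asymmetric start
    late = []                               # record the tail of p1
    for step in range(int(t_end / dt)):
        dp = -p + alpha / (1.0 + np.roll(p, 1) ** n)   # p1 repressed by p3, etc.
        p = p + dt * dp
        if step * dt > t_end / 2:
            late.append(p[0])
    return max(late) - min(late)            # late-time amplitude of p1

alpha = 10.0
for n in [1.5, 3.0]:
    print(f"n={n}: late-time amplitude of p1 = {simulate(alpha, n):.3f}")
# For n = 1.5 the amplitude decays toward zero (stable spiral); for n = 3 the
# system settles onto a limit cycle with a finite amplitude.
```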
Now if you look at the Elowitz repressilator paper, you'll see that he gives some expression for what this thing should be like. And it looks vaguely similar. Of course, there he's including the mRNAs, as well. But if you think that this was painful to do in class, then including the mRNAs is more painful. Are there any questions about this idea? Yeah.
AUDIENCE: So in the paper, did they also only do the stability analysis to determine the--
PROFESSOR: I think they did simulations, as well. So the nature of simulations is that you can convince yourself that there exist places that do oscillate or don't oscillate. Although, you'll notice that they have a very, kind of, enigmatic sentence in here, which is that it is possible that, in addition to simple oscillations, this and more realistic models may exhibit other complex types of dynamic behavior.
And this is just a way of saying, well, you know, I don't know. Maybe someday. Because once you're talking about a six-dimensional system, you never know if you've explored all of the parameter space. I mean, even for fixed parameters, you don't know if you've started at all the right locations.
You can kind of develop some sense that, oh, this thing seems to oscillate or seems to not oscillate. And it does correspond to these conditions. But you don't know. I mean, it could be that, in some regions, you get chaos or other things. Right?
So it's funny because I've read this paper many times. But it was only last night, when I was re-reading it, that I kind of thought about that sentence. Like, yeah, I'm not sure either what this model could possibly do. Yes?
AUDIENCE: In this linear analysis the three x's are the same.
PROFESSOR: That's right.
AUDIENCE: Because they're non-dimensional?
PROFESSOR: All right. So the reason that the three x's are the same is because we've assumed that this really is the symmetric version of the repressilator-- because we're assuming that all of the alphas, all the n's, all the K's, everything is the same across all three of them. So given that symmetry, you're always going to end up with a symmetric version of this.
So I think if it were asymmetric, and then you made the non-dimensional versions of things, I think you still won't end up getting the same x's, just because, if it's asymmetric, then something has to be asymmetric. Yes?
AUDIENCE: [INAUDIBLE] so large alpha leads to--
PROFESSOR: Yes, OK. We can go ahead and do this. So for large alpha, this fixed point-- p0-- is going to be much larger than 1. So p_0 (1 + p_0^n) is about p_0^(n+1). We can neglect the 1 for large alpha.
And then what we can say is that, over here, if we multiply top and bottom by p0 squared-- p0 squared over p0 squared, so multiplying by one-- then this down here, (1 + p_0^n)^2 times p_0^2, is exactly alpha squared. And then, up here, what we have is n alpha times p0 to the n plus 1, and p0 to the n plus 1, we decided, was around alpha for strong alpha.
So that gives us alpha times alpha divided by alpha squared. So this actually all goes away for large alpha. So then, you're just left with n less than 2. Did that--
AUDIENCE: Sorry. Where did that top right equation come from?
PROFESSOR: OK, so this equation here is the solution for where that fixed point is. So in this space of the p0's, if you set the equations for p1, p2, p3 equal to 0, this is the expression for the location of that fixed point, whether alpha is large or small.
And it's just that, as alpha gets large, we get that p0 to the n plus 1 is approximately equal to alpha. And this is for alpha much greater than 1. And in that case, all of these things just go away. And you're just left with n less than 2.
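Written out, the large-alpha argument is (a reconstruction of the board work):

```latex
% Using the fixed-point condition 1 + p_0^n = alpha / p_0:
\begin{align*}
|x| \;=\; \frac{n\,\alpha\,p_0^{\,n-1}}{(1 + p_0^{\,n})^2}
     \;=\; \frac{n\,\alpha\,p_0^{\,n-1}\,p_0^{2}}{\alpha^{2}}
     \;=\; \frac{n\,p_0^{\,n+1}}{\alpha}
     \;\approx\; n
\quad \text{for } \alpha \gg 1,\ \text{since } p_0^{\,n+1} \approx \alpha .
\end{align*}
% So the stability condition |x| < 2 reduces to n < 2 at large alpha.
```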
So for example, as alpha goes down in magnitude, you end up getting a requirement that oscillations require a larger n. We'll give you practice on this. All right, so I think I wrote another-- if I can find my-- you can ask, for alpha equal to 2, what n is required for oscillations?
I'll let you start playing with that. And I will make sure that I've given you the right alpha to use. So in this case, what we're asking is, instead of having really strong maximal expression, if instead expression is just not quite as strong, then what we'll find is that you actually need to have a more cooperative repression in order to get oscillations.
And that's just because, if alpha is equal to 2, then we can, kind of, figure out what p0 is equal to: 1. Right. Great. So the fixed point is at 1. That's great because then this we can figure out. Right?
So this is n times 2, divided by (1 plus 1) squared-- that's a 4-- so it's n over 2. So this tells us that, in this case, we need to have very cooperative repression. We have to have an n greater than around 4 in order to get oscillations in this protein-only model. Yes?
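Written out (a reconstruction of the board arithmetic):

```latex
% The alpha = 2 case: p_0 (1 + p_0^n) = 2 gives p_0 = 1 for any n. Then
\begin{align*}
|x| \;=\; \frac{n\,\alpha\,p_0^{\,n-1}}{(1 + p_0^{\,n})^2}
     \;=\; \frac{2n}{(1+1)^2} \;=\; \frac{n}{2},
\qquad \text{stable iff } \frac{n}{2} < 2 \;\Leftrightarrow\; n < 4 .
\end{align*}
% So with alpha = 2, oscillations require a Hill coefficient greater than about 4.
```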
AUDIENCE: It is kind of strange that even for a really big alpha you still need n greater than something.
PROFESSOR: Yeah, right, right. So this is an interesting question-- you might think that for very strong expression, you wouldn't need to have cooperative repression at all. Right? And I can't say that I have any wonderful intuition about this because it, somehow, has to do with just the slopes of those curves around that fixed point. And it's in three dimensions.
But I think that this highlights that, a priori, if you go and say, oh, I want to construct this repressilator, it's maybe not even obvious what you need. I mean, you might not even think about this idea of cooperative repression. You might be tempted to think that any chain of three proteins repressing each other just, kind of, has to oscillate.
I mean, there's a little bit of a sense of that. And that's the logic that you get if you just do 0's and 1's. If you say, oh, here's x, here's y, here's z, and they're repressing each other. Right? And you say, oh, OK, well, if I start out at, say, 0, 1, 0, you say, OK, that's all fine. But OK, so this one is repressing.
And it's OK, but this guy wasn't repressing this one. So now we get a 1, 1, 0, maybe. Then you say, oh, OK. Well, now this guy starts repressing this one. So now it gives us a 1, 0, 0. And what you see is that, over these two steps, the ON protein has shifted. And indeed, that's going to continue going all the way around.
So from this Boolean logic kind of perspective, you might think that any three proteins mutually repressing each other just has to oscillate. And it's only by looking at things a little bit more carefully that you say, oh, well, we have to actually worry about this-- that you really have to think about choosing transcription factors that multimerize and cooperatively repress the next protein, just to have some reasonable shot at having this thing actually oscillate.
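A toy version of this Boolean argument (an illustration of the 0/1 logic only, not a model from the paper):

```python
# Each protein is ON at the next step only if its repressor is currently OFF.
# Starting from (x, y, z) = (0, 1, 0), the pattern cycles with period 6, so in
# this naive logic the ring always "oscillates".
state = (0, 1, 0)                    # (x, y, z)
for step in range(7):
    print(step, state)
    x, y, z = state
    state = (1 - z, 1 - x, 1 - y)    # x repressed by z, y by x, z by y
```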
AUDIENCE: So in this, we might still be able-- I mean, oscillations like this might still [INAUDIBLE] but just not like, maybe, oscillations around some stable fix point or something. Like, they're just not limit cycle oscillations. Do you think that in a [INAUDIBLE] there would probably still be some kind of oscillations somewhere. Just not this beautiful limit cycle kind.
PROFESSOR: Yeah, my understanding is that, for example, in this protein-only model of the repressilator, if you do not have cooperative repression, then it really just goes to that stable fixed point. Of course, you have to worry about maybe these noise-induced oscillation ideas. But at least within the realm of the deterministic differential equations, the system just goes to that internal fixed point that's specified by this. Question?
AUDIENCE: Can we think of it like the cooperativity, sort of, introduces delay?
PROFESSOR: That's an interesting question-- whether cooperativity, maybe, is introducing a delay. And that's because, after the proteins are made, maybe it takes some extra time to dimerize and so forth.
So that statement may be true. But it's not relevant. OK? And I think this is very important. This model has certainly not taken that into account. So the mechanism that's here is not what you're saying.
But it may be true that, for any experimental system, such delay from dimerization is relevant and helps you get oscillations. Right? But at least within the realm of this model, we have very much not included any sort of delay associated with dimerization or anything. So that is very much not the explanation for why dimerization leads to oscillations here.
And I think this is a wider point that it's very important always to keep track of which effects you've included in any given analysis and which ones are not. And it's very, very common. There are many things that are true. But they may not actually be relevant for the discussion at hand.
And I think, in those situations, it's easy to get mixed up because it still is true, even if it's not what's driving the effect that is being, in this case, analyzed. We're out of time. So we should quit. On Tuesday, we'll start by wrapping up the oscillation discussion by talking about other oscillator designs that allow for robustness and tunability. OK?