Instructor: Prof. Gilbert Strang
Recitation 3
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR STRANG: Shall we just start on this review session? So, any questions on anything from Chapter one, anything from those first seven lectures, is very, very welcome. So this morning we finished the serious part of what we'll do in the chapter, with positive definite matrices. And we'll see a lot of those, fortunately. They're the best. So, questions -- and I hope you look in the book at other problems in the problem sets as well as the ones I suggest. Anyway, ready for any questions. Okay. Which problem is it? In what section? Section 1.6, problem 27, what have I done there? Oh, okay, that's good. So it's about positive definite matrices. May I just put on the board what the central question is? Just put these matrices up. We're given that H and K are positive definite. And then the question is, what about these block matrices? Do I call them M and N? One is the block matrix M = [H, 0; 0, K], and another one is the block matrix N = [K, K; K, K]. So those are both symmetric. We're allowed to ask whether they're positive definite, because they passed the first requirement: they're symmetric. We can discuss them. Because of course H and K each were symmetric, and the transpose of this would bring K transpose down here, but that's K, so all good.
So the question now, from these guys to those guys, I guess, yes. Good question. So this guy H has -- let's take eigenvalues first. So H has some eigenvalues, say lambda_1 to lambda_n. And this guy K -- we'll suppose they're the same size; they don't have to be, and maybe I shouldn't, but I will -- K has some other eigenvalues, maybe e_1 to e_n, e for eigenvalue. And then the question is, okay, what about the eigenvalues of that combination M? And what about N? So it's a good question, I think, for all of us to practice what just came up in the lecture, the idea of block matrices. So I'm looking here at eigenvalues; I could also look at pivots.
Pivots would be interesting to look at, too. Maybe I'll start with pivots. Can I? What would be the pivots of M? If I start elimination on M, what will I see for pivots? Well, I start up in the usual left-hand corner and work down. So what am I going to see first? I'm going to see the pivots of H. By the time I'm halfway there, elimination won't even have seen K. And then what's going to happen? This block is all zeroes, so it never gets touched, right? So when I get down to the second half I see all zeroes here, and K is still going to be sitting right there. Nothing happened, because when I did these eliminations nothing changed with K. So the rest of the pivots will be the pivots of K. Good.
Now, we might hope for the same thing with eigenvalues, and probably that's going to happen. This is like a diagonal matrix. And actually, what words would I use? Block diagonal. I'd call that matrix block diagonal. And those are very nice matrices. That tells us that the big matrix, for all practical purposes, is breaking up into these smaller blocks. Actually MATLAB will search for a way to reorder the rows and columns to get that form, in case it's possible. So here it's in front of us.
Let's see if we can figure it out. That lambda_1, I believe, is also an eigenvalue of M. So it was an eigenvalue of H. The fact that H has that eigenvalue lambda_1 means what? That H times some vector y is lambda_1*y, right? If that's an eigenvalue it's got an eigenvector, and let's call it y. Now this is a good question. I believe this block matrix M also has eigenvalue lambda_1, and what's its eigenvector? What could I multiply M by to get lambda_1 times the same thing? Can you see what? Of course I'm thinking that y is going to help, but the vector has grown now. So what would be the eigenvector here, so that when I multiply by M it comes out right with the same eigenvalue? y, and then? And then zero, good. [y; 0].
Because if I multiply, can I put in what M really is? The H and K: H there, K there. When I do that top multiplication I get lambda_1*y. When I do the bottom multiplication, see, that's the zero block times y, so I get a zero. Perfect. So the eigenvectors of H just sit with a zero in the K part and produce an eigenvector of the block matrix with the same lambda_1. So you can see the whole picture then: the eigenvalues are just sitting there and the eigenvectors are there.
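A quick numerical check of that block-diagonal picture, sketched in MATLAB (the particular H and K here are just made-up positive definite examples):

```matlab
% Made-up positive definite blocks
H = [2 -1; -1 2];
K = [5 2; 2 3];

% Block diagonal M = [H, 0; 0, K]
M = [H zeros(2); zeros(2) K];

% The eigenvalues of M are the eigenvalues of H together with those of K
sort(eig(M))
sort([eig(H); eig(K)])

% An eigenvector y of H, padded with zeros, is an eigenvector of M
[Y, D] = eig(H);
y = Y(:,1); lambda1 = D(1,1);
M*[y; 0; 0] - lambda1*[y; 0; 0]    % zero vector (up to roundoff)
```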
Now maybe you got all that and wanted-- well, I haven't said anything about N, sorry. Everybody thinks more about N. So what's the story with N? What would you say about N? If you look at that matrix, and suppose I don't even tell you at first whether it's positive definite, would you say that looks like an invertible or a singular matrix? Everybody's going to say singular. And why would you say that's singular? Well, the determinant of a block matrix -- this morning I said do whatever you like with block matrices, but I have to admit that if I had a bunch of general blocks and had to take the determinant, and of course everybody's remembering Professor Strang doesn't like determinants, I'd have to do the whole thing. The separate determinants would not tell me the story, usually. So determinants are a bit tricky. But up here the determinant will come out zero.
I guess what I would hope your internal test for a singular matrix is: are the columns independent? Then the matrix is invertible. Or are they dependent -- do you have some columns that are in the same direction as other columns, or the same as combinations of other columns? If you look at the columns of that block matrix N, say column one: column one is the first column of K, repeated. What do you think about the columns of that matrix? Do you see that same column showing up again? Yeah. That very same column, the first column of K repeated, is going to show up again right there. So this matrix has two identical columns. No way it could be invertible.
And in fact, you can tell me what vector -- I'm always asking, are the columns independent? Here, no, they're dependent. And then you can tell me an x. So this is my block matrix N, and I want an x so that the result Nx is zero. That's really the same indication: we found two identical columns. What would be the x? Well, you have to tell me more than just one, minus one, because I've got a big x there. Yeah, I've gotta make it big enough, but essentially it's the one, minus one, thanks. And enough zeroes in there and enough zeroes in there.
So the fact that that vector gets taken to zero is the same thing as saying that one of this column minus one of that column gives zero. In other words, the columns are the same. And of course, by doing this we're seeing the one and minus one could have gone into position two, or position three. So we've got a whole bunch of vectors. This matrix N, this [K, K; K, K], has got a whole lot of vectors that it takes to zero. What I would say is, it has a large null space -- a large space of vectors that it takes to zero. So that's a really useful exercise. I'm delighted you asked it. Now I'm ready for more.
Could do. Exactly, row reduction. I should look to see what would happen in elimination. Well, elimination would go swimmingly along for the first part, because it's only looking here at the upper left K. But then what would I have after the first half of elimination? Well, I'd have, I suppose, whatever that K changed to under elimination. What should we call it? U or something? When I did those row steps, K turned into this upper triangular matrix U. And maybe you can tell me what will have happened at the same time to the rest. What will I see sitting next to it if I just do ordinary elimination, using the pivots and so on? I'll see? It'll be U, because whatever I do on the left side I'm doing to the whole row. And now, the main point is, what will I see below? Keep elimination going -- do elimination to clear out this column, this whole bunch, right?
And now what am I going to see in that corner? All zeroes, right. So that's telling me that this matrix has just got half of its eigenvalues positive, half of its pivots positive; the second half are all zeroes. So I guess here I've found an eigenvector with what eigenvalue? That's looking like an eigenvector to me, if we're thinking eigenvectors. And what's the eigenvalue that goes with it? Zero. Because Nx is 0x. You can either think of it as Nx=0, if you're thinking about systems of equations, or Nx=0x, if you're thinking that that guy is an eigenvector with eigenvalue zero.
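And the same kind of check for N, again sketched in MATLAB with a made-up K:

```matlab
K = [5 2; 2 3];             % any symmetric positive definite K
N = [K K; K K];             % the block matrix N

% Columns 1 and 3 of N are identical, so N is singular
x = [1; 0; -1; 0];          % the one, minus one, padded with zeroes
N*x                         % the zero vector: x is in the null space of N

sort(eig(N))                % half the eigenvalues are zero;
sort(eig(2*K))              % the other half are the eigenvalues of 2K
```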
So I'm pretty happy. I mean, many of you will have spotted this -- perhaps all of you. But I'm happy with that example; it just shows how you have to think big with block matrices, I guess. Good. Okay on that? What else? Thanks.
That's true. And that's really all I've done so far, those four examples. I think that language of fixed-fixed and fixed-free -- I used it early, about those four matrices, but it's really going to show up in the next lecture, Friday, when I have a line of springs and the matrices that come out of that. So Friday we'll finally go beyond those first four; a fifth matrix will appear in this course, finally. Of course, it's going to be related to the first ones, naturally, but we'll see something new, and then we'll see the fixed-free idea again for those. So if that can wait until Friday, you'll see some different ones. Good. Questions, thoughts. You can ask about anything.
Maybe I can ask. Any thoughts about the pace of the course? This is sort of a heavy dose of linear algebra, right? Of course, the answer maybe depends on how much you had seen before. So those who haven't seen very much linear algebra at all really got quite a bit quickly here. Because many courses on linear algebra never reach this key idea of positive definiteness that ties it all together. So you've seen quite a bit, really. Of course, we've concentrated on symmetric matrices and there's a whole garden or forest or zoo of matrices of different types.
So what matrices have we seen? Symmetric matrices, and then their eigenvectors were orthogonal, and we could make them orthonormal. So that gave us -- I don't know if you remember this part -- K = Q*Lambda*Q transpose, and when we wrote it down I said, big deal. That's very important. That's the principal axis theorem. These Q's, what kind of a matrix is Q? It's the eigenvector matrix. And what do we know about it in the special case of a symmetric K? What do we know especially about the eigenvectors then? They're orthogonal. We can make them orthonormal. So Q will be an orthogonal matrix. And that was a matrix where Q transpose is the same as Q inverse. Normally we would see the inverse there, but for these we can put the transpose. So here's one type of matrix, symmetric, very important. Here's another type, orthogonal matrices. And of course, many, many other varieties. Well, in the middle we have a very nice matrix, Lambda: that matrix is diagonal. Right, it's just the eigenvalues, so that's a diagonal matrix.
And what do we know if K is positive definite? This was for any symmetric one. So what's special if K is positive definite? Somehow the positive definiteness should show up here. And where does it show? Positive eigenvalues, exactly. The Q could be any Q, any Q would be fine. But we would see positive eigenvalues.
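A small numerical reminder of K = Q*Lambda*Q transpose, sketched in MATLAB (for a symmetric input, eig returns orthonormal eigenvectors; the K here is just a made-up example):

```matlab
K = [5 2; 2 3];              % symmetric, and in fact positive definite
[Q, Lambda] = eig(K);        % columns of Q are orthonormal eigenvectors

Q'*Q                         % the identity: Q is an orthogonal matrix
Q*Lambda*Q'                  % reproduces K: the principal axis theorem
diag(Lambda)                 % both eigenvalues positive, since K is positive definite
```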
Oh, here's a little point about eigenvalues. Suppose I have my matrix K, and it's got some eigenvalues. Now let me add four times the identity to it. What are the eigenvalues now? What are the eigenvectors now? What's changed, and how, and what hasn't changed? The identity matrix is always the easy one for us to know what's happening. So what is happening to the eigenvalues? If K had these eigenvalues lambda, what are the eigenvalues of K+4I? You add? You add four, yeah. The eigenvalues of K+4I are the eigenvalues of K, plus four. It's just like shifting the matrix: adding four along the diagonal adds four to every eigenvalue.
And the eigenvectors would be exactly the same ones. I would have Kx equal to lambda*x, and 4Ix equal to 4x, so (K+4I)x is (lambda+4)x. So that proves it. Good to see what you can do -- the limited number of things that you're allowed to do without changing the eigenvectors, so that you can spot the eigenvalues right away. The limited things: you can invert, you can shift like this, you can square it, cube it, take powers, things like that.
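That shift, checked numerically in MATLAB with the same made-up K:

```matlab
K = [5 2; 2 3];
[Q,  D ] = eig(K);
[Q4, D4] = eig(K + 4*eye(2));

diag(D4) - diag(D)     % each eigenvalue went up by exactly 4
Q, Q4                  % same eigenvectors (up to sign and ordering)
```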
I'm going to look to you now to give me a lead on something that is interesting, or not. Yes, thanks. Go ahead. Oh, I see, okay, yes. I see. Alright. So that's page 64 of the book. Well, that's a problem that physicists love. I don't know how much I can say about it here, to tell the truth. Just to mention it. Do they use a minus sign? Probably they do. So their equation is minus the second derivative of u, plus (x squared)*u, equal lambda*u, and they are interested in the eigenvalues. The case that we've done in class was without this (x squared)*u term, right? The absolutely most important case is minus the second derivative of u equal lambda*u. What were the eigenvectors in that case? What were the eigenvectors of the second derivative before there was any (x squared)*u, any potential, showing up?
They were just sines and cosines, right? Sines and cosines have the property that if you take two derivatives you get them back with some factor lambda. Now let me just look at that problem without saying too much about it. The first thing I want to know is, have I got a linear problem here? Have I got a linear equation? Because that's where I can talk about eigenvalues. So in the matrix case, I'd say I have a matrix K times an eigenvector, and that matrix represents something linear -- all the rules of addition work. And here, too, it is linear. It is linear.
What I'm trying to say is, I just call that x squared a variable coefficient, and that's what we're going to see in Chapter two. The material or something could lead to some dependence on x. But u is still there just linearly. In other words, this is a perfectly okay linear operator, and am I imagining that it's positive definite? Let's see. This part with the minus sign was positive definite, right? Well, at least semi-definite. So let me just remember the most important case. If I look at this equation, minus d second u/dx squared equals lambda*u, that's the eigenvalue, eigenfunction problem for our good friend. What do I say about the eigenvalues now? What can you tell me about the eigenvalues of that? Mostly positive, because they were sort of omega squareds. But zero could be an eigenvalue, right? What would the eigenfunction be for lambda equal zero? If I wanted a zero on the right side, what functions u could give me zero? A constant function. Yeah, the constant function is certainly there as a possibility.
But anyway, I would say this is positive semi-definite at least. And this x squared part? How do I think about that as a big matrix? I think of it sort of like a big matrix with x squared running down the diagonal. With a matrix, you could say walking down the diagonal, because it's n steps. For differential equations, maybe running is the right word, because it doesn't jump -- it's just bzzz all the way from zero squared up to whatever. Anyway, that would correspond to a diagonal matrix, but not a constant diagonal. Diagonal, but not constant, because this x squared number is changing.
It's like a bunch of springs in which the first spring maybe has a spring constant of one, and then we have a tighter spring, and then a very tight spring, and so on -- higher and higher constants. Well, I'm just speaking very roughly here. Because variable coefficients, variable material properties, springs of different elasticities -- we're ready to move to that. In our problems up to now, the springs were all the same; the bar, if it was a bar, was uniform. And now this would be a step forward. But this specific problem just happens to have a solution that physicists love. It has a meaning to physicists, not to me. And the eigenfunctions have a meaning, and they're famous functions. It's just glorious. So you could say that's the special problem -- the way we had four special matrices in 18.085, that would be a similar special problem in quantum mechanics.
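To see that variable-coefficient picture concretely, here is a rough finite-difference sketch of -u'' + (x squared)*u = lambda*u, done in MATLAB on a large interval with u = 0 at the two ends (an illustration only, not the textbook's treatment; for this famous problem the exact eigenvalues are 1, 3, 5, 7, ...):

```matlab
n = 400; halfwidth = 10;                 % interior mesh points on [-10, 10]
h = 2*halfwidth/(n+1);
x = (-halfwidth + h*(1:n))';             % interior grid points

e  = ones(n,1);
K2 = spdiags([-e 2*e -e], -1:1, n, n) / h^2;   % second difference matrix for -u''
V  = spdiags(x.^2, 0, n, n);                   % x^2 running down the diagonal

A   = K2 + V;                            % symmetric and positive definite
lam = sort(eig(full(A)));
lam(1:6)'                                % approximately 1, 3, 5, 7, 9, 11
```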
Let's turn to something entirely different. Questions about any topic. Or I can ask some and you can take them -- maybe that's one way to review. Go ahead. Thanks. Number 20 of 1.6. 1.6 is a section on -- oh, no, that's the positive definite section, so I'm okay with that. I see that I did ask you a question on the homework from 1.7, which I may not get to cover in lecture, but give it a shot anyway. So what's 20? Oh, okay, that's good. Without multiplying out the matrix. So it's this Q*Lambda*Q transpose. I'm telling you in that question what Q, Lambda, and Q transpose are. The Q is this [cosine, minus sine; sine, cosine]. The Lambda is two and five, I think, in that question. And the Q transpose of course is [cosine, sine; minus sine, cosine]. And if I've told you that those are the factors, then you could multiply them together to get K. But you can tell me things first -- this is like K exposed. We're told more than we would otherwise know. If I multiplied it all together, I wouldn't see that the eigenvectors are these guys and the eigenvalues are these guys.
So, without looking to see, what are the eigenvalues of this matrix K if we multiplied it all together? What would the eigenvalues actually be? Two and five, right, because we built it up that way. What would the determinant be? Now what do we know about determinants? Ten is the right answer. What's the right way to see that? Well, the determinant is always the product of the eigenvalues, isn't it? This middle factor has determinant ten. And if I hadn't normalized, so this outer factor had some bigger determinant, that one would have some smaller determinant -- they're inverses, so their determinants multiply to give me the one, and there's the ten.
What else could I ask about, or did I ask about, for that? The eigenvectors, okay. The eigenvectors of the matrix, what are they? They're these columns that are sitting here for us, those two columns of Q, right. And would you like to just check that? I believe that first column is an eigenvector. And which do you think, two or five, is its eigenvalue? That goes with this first column; everybody's going to say two, and that's right. And do you want me to just take that matrix times this proposed eigenvector and see if it's going to work? Suppose I just do it all and see -- sure enough, this will be an eigenvector. So what do I have at this point? Can you do Q transpose times this first column first? What do I get? c squared plus s squared is one, and minus cs plus cs is zero. So at that point I have [1, 0]. Now comes the middle matrix. So what do I have after that matrix speaks up? [2, 0]. And now comes Q times the [2, 0]. How do you multiply a matrix times that [2, 0] vector? Here's the good way to think of it: it's two times the first column, and zero times the second. So the net result of the whole deal was two times that first column. Which is exactly saying that this is an eigenvector: when I did all that, it came back again, scaled by two. So that's a good example.
And then, is the matrix positive definite? That connects to today's lecture. What test would you use to show that the matrix is positive definite? The eigenvalues, yeah. The eigenvalues are sitting there. Two and five, both positive. If I changed one of those signs, then it would no longer be positive definite. It would still be symmetric, I'd still have the eigenvectors, but the eigenvalue would have jumped to minus five.
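A quick check of that whole calculation in MATLAB (a sketch; any angle works for the cosine and sine):

```matlab
theta = 0.3;                      % any angle
c = cos(theta); s = sin(theta);
Q = [c -s; s c];
Lambda = diag([2 5]);
K = Q*Lambda*Q';                  % the matrix, "exposed"

eig(K)                            % 2 and 5, just as built in
det(K)                            % 10, the product of the eigenvalues
K*[c; s] - 2*[c; s]               % first column of Q is an eigenvector for lambda = 2
```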
I think this sort of thing helps. I hope that as I'm doing these things, you're ahead of me or with me in the calculation -- you just have to do a bunch of these to get confidence that you've got the right thing. Okay, yes? 1.6, 24. Is that also a homework problem? Alright, but you guys are reading the rest of the book, right? Not only the homework questions. Ah. Oh, dear. 24, that's a very good question. About this, yeah. Right. It's a good question. And if today's lecture had run another 20 minutes late -- it already ran a little late -- I could have done this. I'll just say what's in that problem, and then we'll see it again. So what's in that question? Let me write down what it is.
So I have a positive definite matrix K, right? And then I've got its energy. I'm using u rather than x, so let's use u. So my u transpose Ku, like the x transpose Kx from today. That is this bowl-shaped figure, right? I graph it with u_1, u_2, maybe up to u_n, all in the base, and now I have the picture. So I'm in n+1 dimensions; the other dimension is this vertical one. That's where I get this bowl-shaped guy. And I've called that energy. In many, many physical problems there is a factor of 1/2, and it's going to be nice to have that factor of 1/2. It won't change anything, just half as big.
So what is the minimum value of that energy? If I said minimize that, you could do it right away: it'd be zero. Now I'm going to introduce a linear term. This was a quadratic term -- it had u squareds in it. The linear term is going to be minus u transpose f; that's the shorthand for it. And of course we all know that that stands for minus u_1*f_1, minus u_2*f_2, and so on, however many dimensions I'm in. You can imagine I'm in two dimensions, so it's -u_1*f_1 - u_2*f_2. So what I'm saying is that minimizing just the quadratic was, like, too easy, right? The answer was zero. Nobody's interested in that for very long. But it is much more interesting when I get a linear term in there.
So what happens now? Well, the effect of that linear term is to shift that bowl over and down a little. So instead of sitting where I drew it -- let me erase it. If I now graph this function of u, the quadratic is still the most important part, but now I have a first order term. And the result is, the graph still goes through this point. Right? Why does it still go through that same point? Because if I take u_1 and u_2 to be zero, I get zero. So I still get zero there. But the bowl has shifted; it's more like something over here. And it still has a minimum, because the quadratic is still the all-important term. It's just moved over and down. So it has a minimum value -- it actually goes below zero -- and if I'm sitting at the minimum and looking around, I'm seeing a bowl going up, right. So I hope that picture shows it. That's the geometry: the same geometry, just moved over and down.
But the algebra is, where is the minimum? What is the value of that minimum? And this problem, 24, is one way to do the minimum. One way to do it. But actually, if you didn't like linear-- well I won't say didn't like linear algebra, that's against my religion. So if you like calculus and you said, wait a minute, if you give me something you want me to minimize, what will I do? I'll set derivatives to zero.
And can I just jump to the answer? What derivatives do I set to zero now, for the minimum here? The first derivatives. And they're first derivatives with respect to? I look at the derivative with respect to, what? You see, I've already given it away: these are going to be partial derivatives. Why's that? Because I've got two directions. So I have the partial derivative with respect to u_1 equal to zero, and the partial derivative with respect to u_2 equal to zero. In other words, when I sit here at the bottom I'm seeing this whole bowl above me. If I go along the u_2 direction it should go up, and if I come along the u_1 direction it goes up. But it's flat at the bottom, both ways.
So what's my point here? If you like calculus, you'll get two equations. And I just want to say what those equations are, because they're all-important. Suppose we only had u_1 and nothing else, so n is one. Then this quadratic would just be a parabola, 1/2 K*u squared. So what's the derivative of 1/2 K*u squared? If I take the derivative with respect to u, it would be? It'd be Ku. And the same thing works here in the matrix case. And what would be the derivative of u transpose times f, if u was just one thing and f was a single number? The derivative would be? f, yeah. It'd be f.
That's the system; I've jumped to the answer. This set of two, or n, equations, in matrix language, is just -- and I'll even write it better as -- Ku=f. That tells me where the minimum is. The minimizing u is in the base, and then the graph drops down to the bottom. I still have to figure out what the bottom value is, but I've now identified where the minimum occurs. So you get two questions about a minimum. Where is it -- what value of u gives the minimum? And at that lowest point, how low is it? The one thing you've gotta remember is that when you minimize that quadratic, you get that system of equations, Ku=f. And then, of course, for the answer you have to solve that system.
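A small numerical sketch of that conclusion in MATLAB (the K and f here are just made-up numbers):

```matlab
K = [2 -1; -1 2];                 % positive definite
f = [1; 0];

P = @(u) 0.5*u'*K*u - u'*f;       % the energy, with its factor of 1/2

ustar = K \ f;                    % where the minimum is: solve Ku = f
P(ustar)                          % how low it is...
-0.5*f'*(K\f)                     % ...which equals -1/2 f' K^{-1} f

P(ustar + [0.1; -0.2])            % any other u gives a larger value
```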
But this goes back to what I said in the first minute of today: we have two ways of looking at a problem. Usually we go directly to the equations. Sometimes the problem comes to us naturally as a minimum problem -- we have to minimize the cost, we want to build a new school or something. So we've got some cost function, and minimizing it leads, through calculus or linear algebra, to this system. So I've done everything but answer question 24. We only checked the one by one case to see that these are the right equations, derivative equal zero. And there you could use calculus, as I said.
But if I answer that question -- well, let me just do a little. The idea of that question 24, that was 1.6, 24, is that I could rewrite this energy to make it clear. I think it's 1/2 times u minus K inverse f, transpose, times K, times u minus K inverse f, and then a minus 1/2 f transpose K inverse f. Actually, my best friend in China told me this trick, and I didn't give him credit for it in the book, but I should have done. I just think that if you multiply all this out, you'll get the original energy. It's what I would call an identity; that simply means it's true for every u. It's true for everything. Can I try to multiply some of that out? Just so you kind of see it. Yeah, that's what I mean, multiply it out. You've got it. This thing would give me four terms. It'd be this u transpose times K times that u, which is my quadratic guy here. And then I'll have -- it's just like numbers -- this u against the K inverse f piece, and the K inverse f piece against the u, and the K inverse f piece against itself. Let me do that last one. What happens when I do the 1/2, and this minus K inverse f, transposed, times the K, times the minus K inverse f? So I'm using the distributive, whatever, laws. Let's just do that particular term and see what we're getting. So I have 1/2 of the minus K inverse f, transposed. How do I write that? Shoot. Well, it's something times something, transposed. So what do I have to do? Opposite order. So I have a minus, an f transpose, and the K inverse transpose -- you're seeing all this stuff. And then comes the K, and then comes the minus again. So that'd be a plus, right? Times K inverse times f.
So that's one of the terms that shows up. And what good is that one? You could say it's the longest term, the messiest one. But you can fix it. What would you do with that? K times K inverse is? The identity. So we can forget that. And now we're there. That's 1/2, f transpose on this side, f on that side. Oh, and what's K inverse transpose? It's the same as K inverse, because K is symmetric, so its inverse is symmetric -- the transpose doesn't change the matrix. In other words, this term, plus 1/2 f transpose K inverse f, will show up. And this term is -- oh! Nope, sorry, I was going to goof here. I was going to say this is the same as the minus 1/2 f transpose K inverse f at the end, but it's not, right? Why not? Because this one is positive and that one is negative. Has my good friend Professor Lin messed up? Nope.
What's going to happen now? The two terms that I didn't do -- you see, the 1/2 u transpose K u is here, then comes this one which I didn't do, and another one that I didn't do, and then this one that I did. The two that I didn't do will be the same. So they'll all contribute with their plus or minus signs, and the net result will be a perfect match, yeah. So I won't wear out your patience by doing that.
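Written out, the identity from problem 24 and the expansion just walked through look like this (with the 1/2 in place on the squared term):

```latex
\tfrac12\,u^{T}Ku - u^{T}f
   \;=\; \tfrac12\,(u - K^{-1}f)^{T} K\,(u - K^{-1}f) \;-\; \tfrac12\,f^{T}K^{-1}f ,

\tfrac12\,(u - K^{-1}f)^{T} K\,(u - K^{-1}f)
   \;=\; \tfrac12\,u^{T}Ku
     \;-\; \tfrac12\,u^{T}KK^{-1}f
     \;-\; \tfrac12\,f^{T}K^{-T}Ku
     \;+\; \tfrac12\,f^{T}K^{-T}KK^{-1}f
   \;=\; \tfrac12\,u^{T}Ku - u^{T}f + \tfrac12\,f^{T}K^{-1}f .
```

The plus 1/2 f transpose K inverse f coming out of the expansion is cancelled by the minus 1/2 f transpose K inverse f standing at the end, so the two sides agree for every u.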
But I do want to make the point: what was Professor Lin's point in suggesting we write it in this more complicated way? His point was that we can see this last piece is just a constant -- it doesn't depend on u. And now I can see what value of u makes the squared piece as small as possible. Remember, I'm still trying to minimize. The constant part I can't make bigger or smaller; it's fixed. It's u that I can play with. So what u should I choose to make this part smaller? Bear with me. What u will make this big mess as small as I can get it, and how small can I get it? If I take u to be K inverse f, then this is zero, this is zero, I get zero. And that's my claim: u equal K inverse f is the best possible -- it's the minimizer.
And how do I know that I can't make this more negative than zero? I can get it down to zero by making u minus K inverse f the zero vector. But how do I know I can't push it below zero? The K is positive definite, and I'm sitting here with some x transpose and some x -- the x has this sort of messy form, but it's an x, and here's its transpose. So this is an x transpose Kx, and that can't be brought below zero when K is positive definite. Good.
So we've said a good bit about positive definiteness here, but happy to think about-- Yeah, thanks. In fact, finally a fifth. Exactly. Thanks, perfect question. And let me answer it clearly. Each of those five tests completely decides positive definiteness. So the five tests are all equivalent: if a matrix passes one test, it passes all five. So that's great, right? We just do whichever test we want, or whichever way we want to understand the matrix.
I was going to add -- I didn't say a lot about this one. Can I just add a note about a MATLAB command? The command chol(K). Those are the first letters in the name Cholesky; chol is the first four letters of that name. And that's a MATLAB command. If I've defined a matrix that's positive definite and I use that command, out will pop an A, one particular A that works -- an A that makes K equal A transpose A. It'll be a square A and it'll be upper triangular. So this command is very, very close to LU; it's just the appropriate, symmetrized version of elimination when you have a positive definite symmetric matrix. If your matrix is not positive definite, MATLAB will tell you so. So it produces one particular A. There are many A's that would work, but there's one particular upper triangular one, just related to the usual U. But yes, thanks.
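For example (a minimal sketch; MATLAB's chol returns the upper triangular factor, and it raises an error if the matrix is not positive definite):

```matlab
K = [2 -1; -1 2];          % positive definite
A = chol(K)                % upper triangular, one particular A that works
A'*A                       % recovers K

% A symmetric matrix that is not positive definite makes chol complain:
% chol([1 2; 2 1])         % error: matrix must be positive definite
```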
No, I only even get into that ballpark if the matrix is symmetric. I don't touch it otherwise. So my matrix is symmetric before I begin, and I know good things about it. And here I'm asking for more: are the pivots all positive? Are the eigenvalues all positive? So that's more. I could think of some interpretation for non-symmetric matrices, but it has problems, so I'd rather just leave it. Stay with symmetric. Well, that's two hours of lots of linear algebra.
I'm hoping you're going to like the MATLAB problem. Would you like to see what it'll be? I'll just tell you what the equation will be. So it'll be a differential equation. Oh, dear, what is it? So it's a differential equation with a -u'' that we know and love. And what else has it got? Oh yes, right. So here's the problem, here's the equation: it has the -u'', the second derivative, and it has a first derivative term, equal to whatever -- in fact, the example will choose a delta function there. So what am I talking about here? The -u'' would be a diffusion, and the first derivative -- anybody met these things before? -- that would be a convection. So that's a first derivative, and it's anti-symmetric. The MATLAB problem is now going to create the difference matrix for that. So the symmetric part will be our old friend K, but now the convection term is appearing, and it's going to be anti-symmetric. And if the velocity V is big, it gets more and more important. So what happens? What happens with equations like this? Really, this is the first time in the course that we've allowed this first derivative term to pop up. But nevertheless we can see a lot of what's happening.
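Here is a rough sketch of the kind of difference matrix that problem builds (assumptions of the sketch, not the actual assignment: centered differences on a uniform grid, u = 0 at both ends, and placeholder values for the velocity and the delta-function load):

```matlab
n = 50; h = 1/(n+1);          % interior mesh points on (0, 1)
V = 20;                       % convection velocity (placeholder value)
e = ones(n,1);

Kdiff = spdiags([-e 2*e -e], -1:1, n, n) / h^2;             % -u'': the symmetric part, our friend K
Cconv = V * spdiags([-e zeros(n,1) e], -1:1, n, n) / (2*h); % V u': centered difference, anti-symmetric

A = Kdiff + Cconv;            % the full difference matrix: diffusion plus convection

f = zeros(n,1);
f(round(n/2)) = 1/h;          % a delta function load at the midpoint (placeholder)
u = A \ f;
plot((1:n)'*h, u)             % convection sweeps the diffusing spike downstream
```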
And how do you deal with those equations? I mean, if you ask a chemical engineer or anybody, they're always dealing with a flow -- the Charles River is flowing along, that's the convection coming from the velocity, but at the same time stuff is diffusing in it. It's a constant problem in true applications, and this is the best model, I think. So you'll see that, and I'm pleased about that. Any last question? I'm always happy. Well, I'll see you Friday then. Thanks for coming.