0:00:14 | Thank you very much for the introduction.

0:00:16 | So I will talk about something that is, I guess, a little bit different from the rest of

0:00:22 | the talks of the session. The session is about compressed sensing and sparse representations, and this

0:00:27 | falls

0:00:28 | more under sparse representations than compressed sensing: I'll be talking about the rank minimization problem,

0:00:35 | the problem where you have a low-rank matrix

0:00:38 | and you make linear measurements of its entries and want to recover the matrix.

0:00:43 | You can think of

0:00:45 | there being a sparse representation in the sense that it's a low-rank matrix,

0:00:49 | so the number of degrees of freedom in the matrix is not

0:00:51 | the number of entries in the matrix.

0:00:53 | I want to look at nuclear norm minimization, and in particular at thresholds,

0:00:58 | determining thresholds that allow us to recover the matrix perfectly.

0:01:03 | This is joint work with my graduate student Samet Oymak.

0:01:08 | Okay, so

0:01:09 | I'll say a few words about the rank minimization problem, why it matters and some applications

0:01:15 | where it comes up, and the relaxation,

0:01:18 | that is, the nuclear norm relaxation.

0:01:21 | As I said, I'll focus in this talk on trying to predict the recovery thresholds for this

0:01:27 | algorithm. I'll mention some prior work, and then I'll,

0:01:30 | I guess, formally define the problem we want to look at.

0:01:34 | The main contributions of the paper are, you know, null-space-

0:01:38 | type conditions for

0:01:39 | recovery, and then obtaining new recovery thresholds. Then I'll say a few words

0:01:46 | about the technical details involved in this analysis. The reason why I'm only going to say a few words is

0:01:51 | because,

0:01:52 | as I'll mention towards the end, after we submitted this paper

0:01:55 | we obtained some new results which kind of trump everything

0:01:59 | I'm going to say here today.

0:02:01 | But, you know, I have to say it anyway, and I'll tell you something about the new results; they'll

0:02:05 | be presented at ISIT in a couple of months in St. Petersburg.

0:02:10 | So if any of you are going to be there,

0:02:12 | you may hear about them there.

0:02:15 | Okay, so what is the problem we're looking at?

0:02:18 | This is a

0:02:19 | session, I guess, on sparse representations, and sparsity has become very popular;

0:02:24 | compressed sensing is, you know,

0:02:27 | perhaps the most popular

0:02:29 | problem in this area.

0:02:31 | Another situation where you get sparsity is when you have a low-rank matrix.

0:02:36 | Okay, so

0:02:37 | if you think of a low-rank matrix:

0:02:39 | a general matrix that's, say,

0:02:42 | n1 by n2, has n1 times n2 degrees of freedom in it; if it's low

0:02:47 | rank, it has much fewer,

0:02:49 | roughly the rank times the dimension of the matrix.

0:02:53 | And so if you think of the matrix in terms of its singular values, the singular values

0:02:58 | are sparse

0:02:59 | if it's a

0:03:00 | low-rank matrix, because most of the singular values will be zero.

0:03:06 | This problem comes up in a variety of applications. It comes up a lot in control,

0:03:12 | in particular in system identification:

0:03:16 | if you have measurements of a system and you want to,

0:03:20 | you know, figure out a low-order system that matches them,

0:03:23 | then the Hankel matrix that comes out of your measurements has to

0:03:27 | be low rank, and so on.
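To make the Hankel point concrete, here is a minimal sketch in Python; the order-2 system and all sizes are hypothetical, chosen only for illustration:

```python
# A minimal sketch: the Hankel matrix built from the impulse response of a
# low-order linear system has rank equal to the system order. The order-2
# recursion below is a hypothetical example.
import numpy as np
from scipy.linalg import hankel

h = np.zeros(40)
h[0] = 1.0                                 # impulse response of
h[1] = 1.5 * h[0]                          # y[k] = 1.5 y[k-1] - 0.7 y[k-2] + u[k]
for k in range(2, 40):
    h[k] = 1.5 * h[k - 1] - 0.7 * h[k - 2]

H = hankel(h[:20], h[19:])                 # H[i, j] = h[i + j]
print(np.linalg.matrix_rank(H))            # -> 2, the system order
```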

0:03:29 | It comes up in collaborative filtering problems; you may have heard of the Netflix problem.

0:03:34 | It comes up there.

0:03:35 | It comes up in a lot of learning problems, and recently people have actually looked at, you know, traditional CS-

0:03:41 | type problems like, say, graph clustering,

0:03:44 | and they have been able to show that you can put them, in general, into a rank

0:03:48 | minimization problem: if you look at the adjacency matrix of a graph and you want to, say, cluster it into

0:03:53 | things that are almost clique-like,

0:03:56 | it essentially boils down to identifying a low-rank component

0:04:00 | in the,

0:04:02 | if you will, adjacency matrix.

0:04:04 | It comes up, you know, in a lot of applications.

0:04:06 | Let me formally define what it is. We have some matrix X0 which has rank r,

0:04:11 | and we're given some measurements of its components,

0:04:14 | and I'm representing that by this script A.

0:04:17 | The only thing that matters about this script A right now is that it is a linear operator,

0:04:22 | so I'm

0:04:24 | making linear measurements, if you will,

0:04:26 | of the entries of this matrix.

0:04:28 | And, you know,

0:04:30 | the measurements are such that if this is an n1 by n2 matrix, there are

0:04:35 | n1 times n2 unknowns here, but I'm making far fewer measurements.

0:04:39 | Nonetheless I want to be able to recover the matrix, because I know it's low rank.

0:04:43 | And so the problem that we'll try to solve is to find,

0:04:46 | among all matrices that are consistent with what I measure, the one of lowest rank.

0:04:51 | Again, this is a difficult problem, as the compressed sensing problem is,

0:04:56 | because the rank function is not convex, and so

0:05:01 | the approach to solving it is to relax

0:05:03 | the rank

0:05:05 | to the nuclear norm of X, the nuclear norm being the sum of the singular values

0:05:09 | of X.

0:05:10 | Okay, so this is similar to

0:05:12 | compressed sensing, where you take the L0 norm and you relax it

0:05:15 | to the L1 norm.

0:05:16 | This relaxation first appeared, I think, in the PhD thesis of Maryam Fazel, in ninety-

0:05:23 | nine or two thousand;

0:05:24 | it goes back to that.

0:05:27 | And, you know, just as in the L1 case, the reason why this makes sense is that the nuclear norm is

0:05:31 | in some sense the

0:05:32 | tightest convex relaxation

0:05:35 | for this problem,

0:05:37 | in the same sense that, you know, the L1 ball is the tightest, if you will, convex

0:05:42 | body that contains

0:05:44 | all of your sparse vectors: if you look at the nuclear norm

0:05:49 | ball, it's the

0:05:50 | smallest convex body that contains all

0:05:53 | rank-one matrices.
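To make the relaxation concrete, here is a minimal sketch of nuclear norm minimization; the sizes and the i.i.d. Gaussian measurement matrices are hypothetical choices (anticipating the measurement model discussed next), and cvxpy is used purely for illustration:

```python
# A minimal sketch of the nuclear norm relaxation, assuming hypothetical
# problem sizes and i.i.d. Gaussian measurement matrices A_i; any convex
# solver would do.
import numpy as np
import cvxpy as cp

n1, n2, r, m = 20, 20, 2, 300                 # rank-2 ground truth, 300 measurements
rng = np.random.default_rng(0)
X0 = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

As = rng.standard_normal((m, n1, n2))         # measurement matrices A_i
b = np.array([np.sum(Ai * X0) for Ai in As])  # b_i = <A_i, X0> = trace(A_i' X0)

X = cp.Variable((n1, n2))
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(As, b)]
cp.Problem(cp.Minimize(cp.norm(X, "nuc")), constraints).solve()

print(np.linalg.norm(X.value - X0) / np.linalg.norm(X0))  # near zero on success
```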

0:05:57 | Okay, now let me say a few words about the measurements. There are two types of measurements people have

0:06:02 | looked at in the rank minimization problem.

0:06:04 | One is where you actually see

0:06:06 | entries of the matrix

0:06:07 | at random;

0:06:08 | in this case the problem is called matrix completion: you see certain entries of the matrix and you want to

0:06:14 | figure out what the remaining entries are.

0:06:18 | And, you know, the Netflix problem certainly falls under this

0:06:22 | kind of

0:06:24 | situation.

0:06:25 | Another type of measurement is when the measurements that you get are actually inner products

0:06:31 | of the matrix with some measurement matrices, the A_i's;

0:06:35 | the inner product is really the trace of A_i transpose times X,

0:06:37 | that is,

0:06:38 | you just form linear combinations,

0:06:41 | so each measurement is some linear combination of the entries of X.
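A quick numeric check of that statement (just a sketch):

```python
# Check that the inner product <A_i, X> = trace(A_i^T X) is the same linear
# combination of the entries of X as the elementwise form.
import numpy as np

rng = np.random.default_rng(1)
Ai = rng.standard_normal((4, 5))
X = rng.standard_normal((4, 5))

assert np.isclose(np.trace(Ai.T @ X), np.sum(Ai * X))          # same functional
assert np.isclose(np.trace(Ai.T @ X), Ai.ravel() @ X.ravel())  # linear in entries
```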

0:06:45 | And this is natural, for example, in quantum tomography type problems, because

0:06:50 | your X

0:06:51 | is typically, you know, the state of your system, and it's a linear combination of a few pure states, so

0:06:57 | it's low rank,

0:06:58 | and in quantum tomography the type of measurements you make are of this type, so it's very natural there.

0:07:02 | It's also somewhat natural

0:07:04 | in some other applications.

0:07:06 | Okay, so for the matrix completion problem there's an analysis that has been done; there are conditions on when you can actually

0:07:11 | recover from

0:07:13 | looking at entries of the matrix. It requires something called incoherence,

0:07:17 | and the proofs of this use dual certificates, I guess a convex optimization technique.

0:07:25 | The inner product measurements, these types of measurements, are much closer to the types of measurements that we're familiar with

0:07:32 | in compressed sensing, where you

0:07:34 | apply, you know, measurement matrices, and so a lot of the compressed sensing results

0:07:38 | can be

0:07:39 | translated to this

0:07:41 | particular case here, to the inner product case.

0:07:44 | In particular, things like RIP:

0:07:46 | where you have RIP conditions, say, on your measurement matrix that guarantee

0:07:50 | that you can recover things in compressed sensing, they have their analogues here; there are RIP-type conditions on

0:07:55 | these A_i's

0:07:56 | that guarantee that nuclear norm minimization works.

0:07:59 | Likewise, similar to the null space properties that we have in compressed sensing, there are null space properties,

0:08:05 | and I'll talk about these a bit more,

0:08:09 | in

0:08:10 | this inner product measurement case that allow nuclear norm minimization to work,

0:08:17 | and it's in fact these that will be the focus

0:08:20 | today.

0:08:22 | Okay, so let me say a few words again about prior work with regard to RIP.

0:08:28 | So people, as I said, have looked at this, and NNM means nuclear norm minimization; they've studied

0:08:33 | it based on RIP.

0:08:35 | In fact, the first paper that does this is from two thousand seven; it's by Recht, Fazel, and

0:08:40 | Parrilo,

0:08:41 | and they were the first to actually look at

0:08:44 | nuclear norm minimization using

0:08:46 | ideas from compressed sensing, and they gave some results on when

0:08:50 | you can actually recover low-rank matrices.

0:08:54 | Candès and Plan have recently improved the results,

0:08:58 | again using RIP-type results. What they've shown

0:09:01 | is that to recover

0:09:03 | a rank-r, n by n matrix you need on the order of

0:09:07 | r n

0:09:08 | measurements.

0:09:09 | So if you think of,

0:09:10 | you know, a rank-r,

0:09:11 | n by n matrix,

0:09:14 | there are roughly

0:09:15 | r n

0:09:17 | degrees of freedom in it, and so this is saying you need the same order of measurements

0:09:21 | to be able to recover.

0:09:23 | And you'll see we'll improve on these results quite a bit.

0:09:28 | People have also looked at recovery using the null space property, and I'll talk more about this.

0:09:35 | In the null space case it's easiest to do the analysis when your measurements are i.i.d. Gaussian;

0:09:40 | I'll talk about that a bit more.

0:09:43 | And

0:09:43 | what people have observed when they actually run these algorithms is that, similar to the compressed sensing case, there are

0:09:48 | phase transitions. So

0:09:51 | if you fix the number of measurements

0:09:53 | you're making, and if the dimensions of your matrix are fixed and everything is large,

0:09:57 | then as you increase the rank

0:10:00 | there is some phase transition beyond which you can never recover

0:10:03 | and below which you almost always recover. And so

0:10:06 | what we want to do is to see if we can find analytically what that

0:10:09 | phase transition is.
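Empirically, that transition can be mapped by sweeping the rank at a fixed number of measurements and counting successes; here is a rough sketch of such an experiment, with hypothetical, deliberately tiny sizes so it runs quickly:

```python
# A rough sketch of the phase transition experiment: fix n and m, sweep the
# rank r, and record the empirical recovery rate of nuclear norm minimization.
import numpy as np
import cvxpy as cp

def recovers(n, m, r, rng):
    X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    As = rng.standard_normal((m, n, n))
    b = np.array([np.sum(Ai * X0) for Ai in As])
    X = cp.Variable((n, n))
    cons = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(As, b)]
    cp.Problem(cp.Minimize(cp.norm(X, "nuc")), cons).solve()
    return np.linalg.norm(X.value - X0) < 1e-3 * np.linalg.norm(X0)

rng = np.random.default_rng(0)
n, m = 12, 72                     # m = 0.5 * n^2 measurements
for r in range(1, 7):
    rate = np.mean([recovers(n, m, r, rng) for _ in range(5)])
    print(r, rate)                # the success rate drops sharply past some rank
```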

0:10:13 | Okay,

0:10:14 | and so a few years back, with Ben Recht and Weiyu Xu, we attempted

0:10:19 | to determine this phase transition, and I'll show you some of the curves later in terms of what we did

0:10:24 | then.

0:10:27 | And so what we were trying to do in this paper was to extend a lot of the results that exist in

0:10:32 | conventional

0:10:33 | compressed sensing. And so

0:10:35 | Donoho, again a few years back,

0:10:38 | for conventional compressed sensing, when you have i.i.d. Gaussian measurements,

0:10:43 | determined

0:10:44 | what these phase transitions were,

0:10:47 | and he used a Grassmann angle approach. I'll talk more about this in a moment, but the

0:10:53 | idea is that you can recover

0:10:55 | signals if the null space, if you will, of your measurement matrix

0:11:01 | lives outside certain polytopes, and to guarantee that it lies outside those certain polytopes you

0:11:06 | compute the Grassmann angle. And so

0:11:09 | the difficulty in the matrix problem is

0:11:13 | that the null space condition no longer gives you a polytope but something nonlinear, and so this approach does not

0:11:19 | extend, which is why we had difficulties here. I

0:11:22 | won't get into the details of why things are much more complicated.

0:11:28 | Okay,

0:11:28 | let me say something about the

0:11:32 | types of recovery

0:11:34 | that we want. We'll introduce three notions of recovery.

0:11:37 | There's a strong notion:

0:11:39 | if I have a measurement matrix,

0:11:43 | I'll say, you know, that I have strong recoverability

0:11:46 | if I can recover any low-rank matrix up to a certain rank.

0:11:50 | Okay,

0:11:51 | so if I can recover all matrices I call that strong.

0:11:55 | I'll call it sectional recoverability if it's not for all low-rank matrices

0:12:00 | but for all matrices

0:12:02 | whose

0:12:04 | column and row spaces are fixed; so we're looking at low-rank matrices whose column and row

0:12:09 | spans are fixed

0:12:12 | rather than arbitrary. So if I can recover all of those, I call it sectional.

0:12:16 | Weak recoverability is recovery of a particular low-rank matrix: if I give you a low-rank matrix

0:12:22 | and I can recover it, I'll call it weak.

0:12:25 | And so strong, if you think of it, applies

0:12:28 | to being able to recover all matrices;

0:12:30 | weak in some sense

0:12:32 | means that I'm able to recover almost all matrices.

0:12:36 | In fact, when you do simulations, what you actually encounter is the weak one, because you could never check whether

0:12:42 | you are recovering every matrix,

0:12:43 | right?

0:12:44 | But these are

0:12:45 | the transitions that you observe in practice, and the curves I'll show you towards the end of the talk all

0:12:51 | correspond to the weak threshold.

0:12:53 | Again, these have natural counterparts in compressed sensing that people have seen.

0:12:58 | And so what we do in this paper is give

0:13:01 | recovery conditions

0:13:04 | for all three of these that improve on what's in the literature,

0:13:08 | and based on them we were able to actually

0:13:10 | improve

0:13:12 | the bounds on the phase transition.

0:13:16 | Okay, so let's go into this

0:13:17 | in a little bit more detail.

0:13:19 | So again, the measurements that we're looking at are some linear mapping, so they take

0:13:24 | an n1 by n2 matrix, that's the low-rank matrix I'm trying to

0:13:29 | recover,

0:13:30 | and map it to some m measurements, and m

0:13:32 | here represents the number of measurements.

0:13:35 | And so m is a lot less than, say, n1 times n2: I'm not seeing every entry of the

0:13:39 | matrix, I'm making fewer measurements, but I want to recover it given that I know things are

0:13:43 | low rank.

0:13:44 | And again, here

0:13:45 | the way I'm going to model these measurements is that each measurement

0:13:49 | is obtained by an inner product with an i.i.d. Gaussian matrix.

0:13:53 | Okay.

0:13:54 | And so, as to what we mean by strong recoverability, which I alluded to on the previous slide:

0:13:59 | the strong recovery threshold is the largest

0:14:03 | number r for the rank of the matrix such that all your matrices with rank r

0:14:08 | can be recovered from the nuclear norm relaxation.

0:14:11 | Okay.

0:14:12 | And so, clearly, as

0:14:14 | n1 and n2 go up, you might

0:14:15 | imagine the rank will go up,

0:14:19 | but it's actually

0:14:21 | the other way around: as n1 and n2 go up, the rank goes down, because I'm

0:14:26 | fixing the number of measurements and there are more degrees of freedom;

0:14:29 | and if I increase m, of course r will go up.

0:14:33 | And so here's the theorem,

0:14:37 | and it's actually this theorem that has allowed us to analyze things better.

0:14:41 | The point of this theorem is the "if and only if". It says:

0:14:44 | any matrix of rank at most r

0:14:46 | can be recovered via nuclear norm minimization if and only if

0:14:52 | the following null space condition holds. So for any

0:14:55 | matrix W that's in the null space of this operator, that is,

0:14:59 | script A applied to W is equal to zero, so any matrix in the null space (again, this is a

0:15:04 | linear operator):

0:15:05 | if the following property holds for all W then, as I said, it's if and only if. And

0:15:10 | what that property is, is that you look at W,

0:15:13 | you compute its singular values,

0:15:16 | order them, and the condition is that the sum of the first r of them, where r is the rank you're

0:15:21 | looking for, is less than the sum of the rest.

0:15:24 | That's the condition,

0:15:25 | and we need it for every W.
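The condition itself is simple to state in code. Here is a sketch that tests it for one random element of the null space of an i.i.d. Gaussian operator (the ensemble analyzed below); of course the theorem needs it for every W, which is exactly what makes the analysis hard:

```python
# A sketch of the null space condition from the theorem: for W with
# script-A(W) = 0, the sum of the r largest singular values of W must be
# less than the sum of the remaining ones. Sizes are hypothetical.
import numpy as np
from scipy.linalg import null_space

def condition_holds(W, r):
    s = np.linalg.svd(W, compute_uv=False)   # singular values, descending
    return s[:r].sum() < s[r:].sum()

n1, n2, m, r = 15, 15, 150, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n1 * n2))        # Gaussian measurement operator
N = null_space(A)                            # orthonormal basis of its null space
w = N @ rng.standard_normal(N.shape[1])      # one random null space element
print(condition_holds(w.reshape(n1, n2), r))
```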

0:15:28 | Again, those of you who might be familiar with the null space condition in compressed sensing: this is very

0:15:35 | much the analogue of that. In compressed sensing, again, you look at the

0:15:38 | null space of your measurement matrix,

0:15:41 | and the condition is that the sum of the absolute values of the r biggest entries

0:15:45 | should be less than the sum of the absolute values of the remaining entries. What happens in the

0:15:50 | matrix case is that the absolute values get replaced by singular values;

0:15:55 | that's all that happens here.

0:15:57 | The difficulty is that in the compressed sensing case

0:16:00 | those inequalities that you get

0:16:03 | essentially give you a polytope, because they're all, if you will, linear

0:16:07 | in W. Here it's much more complicated because of the presence of these

0:16:12 | singular values,

0:16:12 | so these inequalities, none of them are linear.

0:16:18 | Okay, and so there's a proof for this, which is in the paper but which I won't go into; it's based

0:16:21 | on some

0:16:22 | majorization theorem.

0:16:24 | And again, it's really this condition that has allowed us to improve the

0:16:28 | thresholds, where there was no such condition before.

0:16:29 | Okay.

0:16:30 | If you look at the sectional case:

0:16:33 | recovery in the sectional sense is that,

0:16:36 | you know, what you do is you pick

0:16:39 | subspaces S_P and S_Q,

0:16:42 | of, say, dimension r,

0:16:44 | and you define projection operators onto these two subspaces, and you say something has

0:16:49 | sectional recoverability if all rank-r matrices whose column and row spans are S_P and S_Q can be recovered.

0:16:57 | And then there's a condition, again if and only if, for that, in which

0:17:00 | you look at the null space of your measurements, its W's, and you require, roughly,

0:17:05 | that the nuclear norm of P W Q, where P and Q are these projection matrices, should be less than that of the remaining part.

0:17:13 | Again, it's an if and only if condition.

0:17:15 | And then finally, the weak

0:17:17 | threshold is, for an arbitrary X0, that you want

0:17:20 | it to be recovered,

0:17:22 | up to rank r, again through the nuclear norm minimization.

0:17:27 | And the if and only if condition here is the following: you take your X0, you do

0:17:31 | the SVD,

0:17:32 | and you basically require, for everything in the null space,

0:17:36 | that this condition holds; it's the trace condition presented up here.

0:17:41 | So that's what we need to analyze.

0:17:43 | And the difficulty in analyzing this is that you need, for example, these conditions to hold for every element W

0:17:49 | in the null space of your measurements.

0:17:52 | Now, when your measurement matrix is Gaussian, it turns out that the

0:17:55 | elements in the null space are also Gaussian distributed,

0:17:59 | and so you kind of want the condition to hold for all Gaussians in that null

0:18:04 | space, and that's

0:18:06 | where the difficulty is.

0:18:09 | Again, let me just introduce some notation before I show you the main result and the curves. So again,

0:18:14 | the matrix is going to be n1 times n2;

0:18:17 | I'll take n1 less than n2.

0:18:20 | And so

0:18:21 | the aspect ratio, think of that as being the quantity alpha.

0:18:24 | I'm going to look at a rank

0:18:26 | that is,

0:18:27 | say, beta times n1, where beta is less than

0:18:31 | one; alpha is also less than one.

0:18:33 | And then again, the number of measurements I'm going to make is proportional to the size of the matrix,

0:18:38 | n1 times n2, and again the proportionality constant is less than one.

0:18:42 | And so what I want to do, for example, is I'm going to fix

0:18:46 | n1, n2 and, you know, the number of measurements I'm making, and I want to see how big

0:18:50 | r can

0:18:51 | get; that will be the threshold.

0:18:53 | And to describe the threshold I need this quantity,

0:18:56 | gamma of alpha and beta, which are parameters of the

0:18:59 | problem.

0:19:00 | It's basically defined as this quantity here:

0:19:04 | the expected value of the sum of the first alpha beta n

0:19:08 | singular values of an i.i.d. Gaussian matrix,

0:19:11 | which with this normalization becomes

0:19:14 | a quantity independent of n.

0:19:16 | Okay, so that's what this is,

0:19:18 | and it can be computed; I give the expression here, but

0:19:21 | standard random matrix theory will allow you to derive the closed-form expression.
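For intuition, this quantity can be estimated by Monte Carlo. In the sketch below, gamma(alpha, beta) is taken to be the expected sum of the top beta*n1 singular values of an n1 x n2 i.i.d. Gaussian matrix with n1 = alpha*n2; the 1/(n1*sqrt(n2)) normalization is my assumption, chosen so the estimate is roughly independent of n (the paper's closed form comes from random matrix theory):

```python
# A rough Monte Carlo sketch of gamma(alpha, beta). The normalization by
# n1 * sqrt(n2) is an assumption made here so the estimate stabilizes as the
# dimensions grow; the exact closed form follows from random matrix theory.
import numpy as np

def gamma_mc(alpha, beta, n2=300, trials=20, seed=0):
    rng = np.random.default_rng(seed)
    n1 = int(alpha * n2)
    k = int(beta * n1)                        # number of top singular values
    acc = 0.0
    for _ in range(trials):
        s = np.linalg.svd(rng.standard_normal((n1, n2)), compute_uv=False)
        acc += s[:k].sum()
    return acc / (trials * n1 * np.sqrt(n2))

print(gamma_mc(1.0, 0.3))                     # square case, rank ratio 0.3
```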

0:19:26 | And so these are the results: we have

0:19:30 | bounds on the thresholds, for the weak, sectional, and strong thresholds, in terms of alpha,

0:19:35 | beta,

0:19:36 | and the gamma function.

0:19:40 | Okay, so let's look at

0:19:41 | what these

0:19:42 | thresholds look like.

0:19:44 | So, what I've plotted here:

0:19:46 | here is the number of samples I'm taking, so if I'm at point five

0:19:50 | that means I'm making a number of measurements equal to

0:19:53 | half the

0:19:54 | number of entries in the matrix.

0:19:57 | What I have

0:19:57 | here: so with point five I'm getting something around point six.

0:20:02 | This is a surrogate for the rank; what it really means is, you know, the inverse of

0:20:06 | the oversampling, so point five

0:20:08 | multiplied by point six gives point three, which means I can

0:20:12 | recover matrices up to

0:20:14 | rank point three n.

0:20:15 | This, I should have said, is for square matrices,

0:20:18 | with n1 equal to n2.

0:20:20 | And looking at point two: here it's point five,

0:20:22 | and I have to multiply this by

0:20:24 | point two, so it's saying that if I observe one fifth of the entries of the matrix,

0:20:28 | or make that many measurements,

0:20:30 | I can go up to ranks that are one tenth

0:20:33 | of n.

0:20:34 | That's

0:20:35 | the threshold that we're trying to predict.

0:20:37 | And the theorems that I just showed you

0:20:39 | make the following predictions.

0:20:41 | So that's the weak threshold,

0:20:43 | this is the sectional, this is the

0:20:45 | strong. And just to show you

0:20:47 | how this improves on the previous results: this was the previous

0:20:51 | strong threshold, and the dotted line, which matches our sectional, was the previous

0:20:56 | weak threshold. So we've improved them quite a bit, but still

0:21:00 | it's

0:21:00 | quite a bit off;

0:21:01 | but at least in this regime

0:21:03 | it works well.

0:21:05 | And again, you know, I won't go into the technical details of this, because, as I said,

0:21:10 | these results were all trumped.

0:21:12 | But basically the difficulty

0:21:14 | in analyzing this problem is that you have quantities like this that you need to bound for all W.

0:21:20 | The way we did that was we

0:21:23 | essentially bounded the expected value,

0:21:25 | the mean,

0:21:26 | and then used some concentration-of-measure type results. So if you can bound the expected value of things like

0:21:32 | these,

0:21:33 | and there's machinery to do that,

0:21:35 | then you can prove what you need using concentration inequalities; it's a lot of hard work

0:21:40 | in that style.

0:21:41 | But as I said, after we submitted this paper

0:21:45 | we came up with another analysis,

0:21:48 | and that analysis essentially gives

0:21:50 | these

0:21:51 | solid curves. That's the weak threshold, this is the weak threshold of the current paper;

0:21:56 | this is the sectional threshold and this is the current paper's; this is the strong and this is

0:21:59 | the current paper's.

0:22:00 | And pretty much this is on the money now,

0:22:03 | so we're actually able to analytically predict

0:22:06 | the threshold

0:22:07 | exactly.

0:22:10 | That's not this paper; it's the ISIT paper.

0:22:12 | I'll just make a little bit of an

0:22:15 | advertisement for where this result comes from.

0:22:18 | The difficulty in what we're trying to do is that we're trying to extend Donoho's

0:22:22 | approach to computing compressed sensing thresholds, and those are difficult computations because the approach is based on Grassmann angles,

0:22:28 | which you can compute in principle for polytopes, but which are much harder to do for these nonlinear objects.

0:22:34 | But, you know, a student of mine (again, an advertisement for his

0:22:37 | work of a couple of years back)

0:22:40 | came up with an alternative approach for compressed sensing based on an escape-through-a-mesh

0:22:49 | type idea to compute L1 optimization thresholds.

0:22:52 | And the good news is that this analysis

0:22:54 | directly extends

0:22:55 | to the matrix

0:22:56 | case and allows us

0:22:57 | to basically

0:22:59 | compute the thresholds, and the results are, you know,

0:23:03 | analytical. I'll give you a sense of them: if you look at strong recovery,

0:23:08 | the number of measurements

0:23:09 | has to satisfy this;

0:23:11 | so for example in the square case, when n1 is equal to n2,

0:23:14 | this becomes sixteen r n.

0:23:16 | Okay.

0:23:17 | Now if I have a rank-r matrix,

0:23:20 | roughly there are r n

0:23:22 | degrees of freedom in it,

0:23:24 | or two r n

0:23:25 | to be specific, and so sixteen r n is eight times that; so you need eight times

0:23:30 | more measurements than the degrees of freedom for the

0:23:32 | strong recovery threshold,

0:23:34 | and that's why what you see at this point here, where

0:23:37 | the strong curve is, is one over eight.

0:23:42 | For weak recovery this is what you get, and when n1 is equal to n2 this becomes

0:23:46 | six r n, and so you're at three times the number of

0:23:51 | degrees of freedom of a rank-r matrix,

0:23:54 | and again this point here is one third.
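A quick sanity check of those oversampling factors for the square case, using the exact degrees-of-freedom count r(2n - r) for a rank-r n x n matrix:

```python
# Sanity check of the oversampling factors quoted above for the square case:
# strong recovery needs about 16 r n measurements, weak about 6 r n, and a
# rank-r n x n matrix has r (2 n - r) ~ 2 r n degrees of freedom.
n, r = 1000, 10
dof = r * (2 * n - r)
print(16 * r * n / dof)   # ~8  -> the strong curve sits near rank ratio 1/8
print(6 * r * n / dof)    # ~3  -> the weak curve sits near rank ratio 1/3
```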

0:23:58 | So I guess I'll stop here. I explained what these thresholds were, and we

0:24:02 | had results, but as I said, you know, we can now get

0:24:05 | these

0:24:06 | exactly, and so

0:24:08 | that's the story.

0:24:15 | [Question from the audience.]

0:24:21 | So, first I have to admit I'm really not at home with these things,

0:24:26 | and,

0:24:28 | did I get it correctly? I mean,

0:24:31 | if I give you a matrix,

0:24:33 | and

0:24:34 | if you have a measurement operator that satisfies nice properties,

0:24:39 | you can actually recover X, like the entries of the matrix, for

0:24:44 | instance?

0:24:45 | That's what these rank minimization problems do. So,

0:24:49 | as I said, it's analogous to compressed sensing, where you have a sparse signal, and when you make

0:24:54 | measurements of the entries, linear combinations of them, you can recover

0:24:58 | the signal

0:24:59 | because you have sparsity. Here, if you have a low-rank matrix

0:25:03 | and you make

0:25:04 | measurements, you know, you see certain entries or linear combinations of

0:25:07 | certain entries, you can recover the entire matrix due to the fact that it's low

0:25:13 | rank.

0:25:16 | [Inaudible question from the audience.]

0:25:20 | Yeah, so the analysis is for Gaussians. So even in the

0:25:23 | compressed sensing case,

0:25:26 | the only case where you can do an exact analysis of the threshold is the Gaussian case. Now,

0:25:31 | what people have observed

0:25:33 | is,

0:25:36 | so for example, if you take an i.i.d. Gaussian matrix you can compute the threshold; that's what Donoho

0:25:40 | did.

0:25:41 | If you use a matrix that's

0:25:44 | not i.i.d. Gaussian, say i.i.d. Bernoulli or other types of ensembles,

0:25:48 | and do L1 optimization, people observe

0:25:51 | the same

0:25:52 | phase transition at the same point.

0:25:54 | Even, you know, if you look at sparse measurement matrices, things like expander graphs and so on,

0:25:59 | they seem to exhibit the same thing.

0:26:01 | And why that is, is not fully understood. There's a paper in the Proceedings of the IEEE by

0:26:07 | Donoho where he talks about this,

0:26:08 | the various thresholds and this

0:26:10 | universal-looking behavior. So we don't fully understand it, but it

0:26:14 | appears to hold. So the analysis for the Gaussian case is doable,

0:26:18 | but it appears that, you know, the result is more universal.

0:26:21 | And we haven't done as much investigation for the

0:26:27 | matrix

0:26:28 | rank minimization problem as for compressed sensing, but roughly we've seen the same thing.

0:26:38 | You proposed conditions,

0:26:40 | but I want to ask:

0:26:44 | is it possible to verify the conditions

0:26:48 | in

0:26:49 | these theorems

0:26:50 | for a given measurement matrix?

0:26:53 | No, I mean, so this is similar; again, you know, if you're familiar with it,

0:26:59 | even in the compressed sensing case, when you look at the

0:27:03 | if and only if conditions for when you can actually recover a sparse signal, they're again

0:27:09 | in terms of the null space conditions: you have to look at

0:27:12 | every vector in the null space of your measurement matrix,

0:27:15 | and so those in principle cannot be

0:27:19 | checked;

0:27:19 | it's easy to show that it's NP-hard to do so.

0:27:22 | Here it's even more so, because your null space consists of matrices,

0:27:26 | which is the reason why, when people want to analyze these, you go to these ensembles of

0:27:32 | measurement matrices, because there you can say that the property holds with

0:27:36 | very high probability.

0:27:38 | But the quick answer is no: I mean, if you give me a matrix,

0:27:41 | there's no way I can check it.

0:27:46 | What happens if the matrix to be reconstructed has a

0:27:51 | particular symmetry?

0:27:53 | Well, yes. So if it's

0:27:55 | symmetric,

0:27:57 | yes, I didn't state that there, but roughly what happens is that the thresholds

0:28:03 | improve by half.

0:28:05 | Certainly at the very end it does. So

0:28:08 | if your matrix is symmetric,

0:28:10 | basically, here,

0:28:12 | that factor of two goes away in the formula.

0:28:14 | And if you have, let's say... for the Toeplitz case I don't know, but,

0:28:20 | for example, if you know the matrix is non-negative definite,

0:28:24 | then it again improves, so you need fewer measurements.

0:28:27 | Toeplitz, I don't know.

0:28:31 | But that's an interesting question, because

0:28:33 | if you look at the system ID problem, which is an important one, you'd be doing it

0:28:37 | for a Hankel matrix, which is structured.

0:28:40 | But what I would like to ask is: what if you know they're all positive,

0:28:44 | that is,

0:28:44 | what about every entry being positive?

0:28:49 | I haven't looked at that. We've analyzed the positive

0:28:52 | semidefinite case, but I haven't looked at the case where the entries of the matrix are positive.

0:28:59 | No.

0:29:03 | Okay,

0:29:03 | thank you so much.