0:00:14 Thank you very much for the introduction. I'll be talking about something that's, I guess, a little different from the rest of the talks in this session, which are about compressed sensing and sparse representations, although it does fall more or less under sparse representation and compressed sensing: I'll be talking about the rank minimization problem. This is the problem where you have a low-rank matrix, you make linear measurements of its entries, and you want to recover the matrix. You can think of there being a sparse representation in the sense that, because the matrix is low rank, its degrees of freedom are far fewer than its number of entries. I want to look at nuclear norm minimization and, in particular, at determining thresholds that allow us to recover the matrix perfectly. This is joint work with my graduate student, Samet Oymak.
0:01:08 Okay, so I'll say a few words about the rank minimization problem, why it matters, and some applications where it comes up, and then about the nuclear norm relaxation. As I said, in this talk I'll focus on trying to predict the recovery thresholds for this algorithm. I'll mention some prior work, then formally define the problem we want to look at. The main contributions of the paper are null-space-type conditions for recovery and, from those, new recovery thresholds. I'll say only a few words about the technical details involved in this analysis, and the reason is that, as I'll mention towards the end, after we submitted this paper we obtained some new results that more or less trump everything I'm going to say here today. I have to present this anyway, but I'll tell you something about the new results, which will be presented at ISIT in a couple of months in Saint Petersburg; if any of you are going to be there, you may hear about them there.
0:02:15 Okay, so what is the problem we're looking at? This is a session on sparse representations, and sparsity has become very popular; compressed sensing is perhaps the most popular problem in this area. Another situation where you get sparsity is when you have a low-rank matrix. A general n1-by-n2 matrix has n1 times n2 degrees of freedom; if it's low rank it has much fewer, roughly the rank times the dimension of the matrix. And if you think of the matrix in terms of its singular values, the singular values are sparse when the matrix is low rank, because most of them will be zero.
0:03:06 This problem comes up in a variety of applications. It comes up a lot in control, in particular in system identification: if you have measurements of a system and want to figure out a low-order system that matches them, the Hankel matrix that comes out of your measurements has to be low rank, and so on. It comes up in collaborative filtering problems; you may have heard of the Netflix problem. It comes up in a lot of learning problems, and recently people have gone back and looked at traditional CS-type problems like graph clustering and have been able to show that you can cast them as rank minimization problems: if you look at the adjacency matrix of a graph and you want to cluster it into things that are almost clique-like, that essentially boils down to identifying a low-rank component in, if you will, the adjacency matrix. So it comes up in a lot of applications.
0:04:06 Let me formally define the problem. We have some matrix X0 of rank r, and we're given some measurements of its components; I'm representing that by this script A, and the only thing that matters about script A right now is that it is a linear operator. So I'm making linear measurements, if you will, of the entries of this matrix. The measurements are such that, if this is an n1-by-n2 matrix, there are n1 times n2 unknowns, but I'm making far fewer measurements; nonetheless, I want to be able to recover the matrix, because I know it's low rank. So the problem we'd like to solve is: among all matrices consistent with what I measured, find the one of lowest rank. Again, this is a difficult problem, just as the compressed sensing problem is, because the rank function is not convex, so the approach is to relax the rank to the nuclear norm of X, the nuclear norm being the sum of the singular values of X.
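For reference, a standard way to write the rank minimization problem and its nuclear norm relaxation (my transcription, not copied from the slides):

```latex
\min_{X}\ \operatorname{rank}(X)\ \ \text{s.t.}\ \ \mathcal{A}(X)=\mathcal{A}(X_0)
\qquad\longrightarrow\qquad
\min_{X}\ \|X\|_{*}\ \ \text{s.t.}\ \ \mathcal{A}(X)=\mathcal{A}(X_0),
\qquad \|X\|_{*}=\sum_{i}\sigma_{i}(X).
```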
0:05:10 Okay, so this is similar to compressed sensing, where you take the L0 norm and relax it to the L1 norm. This relaxation was first used, I think, in the PhD thesis of Maryam Fazel, around 1999 or 2000; it goes back to that. And just as in the L1 case, the reason this makes sense is that the nuclear norm is, in some sense, the tightest convex relaxation for this problem: in the same sense that the L1 ball is the tightest, if you will, convex body that contains all of your sparse vectors, the nuclear norm ball is the smallest convex body that contains all the rank-one matrices of unit norm.
0:05:57 Okay, now let me say a few words about the measurements. There are two types of measurements people have looked at in the rank minimization problem. One is where you actually see entries of the matrix at random; in this case the problem is called matrix completion. You see certain entries of the matrix and want to figure out what the remaining entries are; the Netflix problem certainly falls under this kind of situation. The other type of measurement is when the measurements you get are actually inner products of the matrix with some measurement matrices A_i, where the inner product is really the trace of A_i transpose times X. So you just form linear combinations: each measurement is some linear combination of the entries of the matrix. This is natural, for example, in quantum tomography type problems, because the object you're after is typically the state of your system, which is a linear combination of a few pure states, so it's low rank, and in quantum tomography the measurements you make are of this type, so it's very natural there. It's also somewhat natural in some other applications.
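To make the inner-product measurement model and the nuclear norm relaxation concrete, here is a minimal sketch (my own illustration, not code from the talk). It assumes the CVXPY package is available; the matrix size, rank, and number of measurements are arbitrary illustrative choices.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Ground truth: an n1 x n2 matrix of rank r.
n1, n2, r = 20, 20, 2
X0 = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

# Inner-product measurements b_i = <A_i, X0> = trace(A_i^T X0),
# with each A_i an i.i.d. Gaussian measurement matrix.
m = 300  # number of measurements, well below n1 * n2 = 400
A = rng.standard_normal((m, n1, n2))
b = np.array([np.trace(Ai.T @ X0) for Ai in A])

# Nuclear norm relaxation of the rank minimization problem.
X = cp.Variable((n1, n2))
constraints = [cp.trace(A[i].T @ X) == b[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
prob.solve()

print("relative recovery error:",
      np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```

With enough measurements relative to the rank (the thresholds discussed later in the talk quantify how many), the printed relative error should be near numerical precision.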
0:07:06 Okay, so for the matrix completion problem there is an analysis that's been done; there are conditions on when you can actually recover from looking at entries of the matrix. It requires something called incoherence, and the proofs use dual certificates, which is, I guess, a convex optimization technique. The inner-product measurements are much closer to the types of measurements we're familiar with from compressed sensing, when you use random measurement matrices, and so a lot of the compressed sensing results can be translated to this particular case, the inner-product case. In particular, things like the RIP: just as there are RIP conditions on your measurement matrix that guarantee you can recover things in compressed sensing, they have their analogues here; there are RIP-type conditions on the A_i's that guarantee that nuclear norm minimization works. Likewise, similar to the null space property that we have in compressed sensing, there are null space properties, which I'll talk about a bit more, in this inner-product measurement case that allow nuclear norm minimization to work. And in fact it's these that will be the focus of today's talk.
0:08:22 Okay, so let me say a few words about prior work with regard to RIP. As I said, people have looked at nuclear norm minimization and studied it based on RIP. In fact, the first paper to do this is from 2007; it's by Recht, Fazel, and Parrilo. They were the first to actually look at nuclear norm minimization using ideas from compressed sensing, and they gave some results on when you can actually recover low-rank matrices. Candès and Plan have recently improved those results, again using RIP-type arguments: what they've shown is that to recover a rank-r, n-by-n matrix you need on the order of rn measurements. If you think of a rank-r, n-by-n matrix, it has roughly rn degrees of freedom, so this is saying you need the same order of measurements to be able to recover it. As you'll see, we'll improve on these results quite a bit.
0:09:28 People have also looked at recovery using the null space property, and I'll talk more about this. In the null space case it's easiest to do the analysis when your measurements are i.i.d. Gaussian; I'll talk about that a bit more. And what people have observed when they actually run the algorithms is that, similar to the compressed sensing case, there are phase transitions: if you fix the number of measurements you're making and the dimensions of your matrix, and everything is large, then as you increase the rank there is some phase transition beyond which you can essentially never recover and below which you almost always recover. So what we want to do is see whether we can find analytically what that phase transition is.
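The kind of experiment being described here can be sketched as follows (again my own illustration, reusing the CVXPY approach from the sketch above; the sizes, the rank sweep, and the success tolerance are arbitrary assumptions):

```python
import numpy as np
import cvxpy as cp

def recovered(n, m, r, tol=1e-3, rng=None):
    """One trial: draw a random rank-r n x n matrix, take m i.i.d. Gaussian
    inner-product measurements, solve the nuclear norm relaxation, and
    report whether the relative recovery error is below tol."""
    rng = rng or np.random.default_rng()
    X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    A = rng.standard_normal((m, n, n))
    b = np.array([np.trace(Ai.T @ X0) for Ai in A])
    X = cp.Variable((n, n))
    prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                      [cp.trace(A[i].T @ X) == b[i] for i in range(m)])
    prob.solve()
    return np.linalg.norm(X.value - X0) <= tol * np.linalg.norm(X0)

# Fix the measurement ratio m / n^2 and sweep the rank: the empirical
# success rate drops sharply from about 1 to about 0 around some critical
# rank, which is the (weak) phase transition the talk is about.
n = 20
m = int(0.5 * n * n)
for r in range(1, 9):
    rate = np.mean([recovered(n, m, r) for _ in range(10)])
    print(f"rank {r}: success rate {rate:.2f}")
```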
0:10:12 Okay. So a few years back, with Ben Recht and Weiyu Xu, we attempted to determine this phase transition, and I'll show you some of those curves later, alongside what we did here. What we were trying to do in this paper was to extend a lot of the results that exist for conventional compressed sensing. Donoho, again a few years back, determined what these phase transitions are for conventional compressed sensing with i.i.d. Gaussian measurements, using the Grassmann angle approach. I'll talk more about this in a moment, but the idea is that you can recover signals if the null space, if you will, of your measurement matrix lives outside certain polytopes, and to guarantee that it lies outside those polytopes you compute the Grassmann angle. The difficulty in the matrix problem is that the null space condition no longer gives you a polytope but something nonlinear, so this approach does not extend, which is why we had difficulties here; I won't get into all the details of why things are so much more complicated.
0:11:28 Okay, let me say something about the types of recovery that we want; we'll introduce three notions of recovery. There's a strong notion: if I have a measurement operator, I'll say I have strong recoverability if I can recover any low-rank matrix up to a certain rank. If I can recover all such matrices, I call it strong. I'll call it sectional recoverability if it holds not for all low-rank matrices, but for all matrices whose column and row spaces are fixed; so we're looking at low-rank matrices whose column and row spans are fixed, rather than arbitrary. If I can recover all of those, I call it sectional. Weak recoverability is recovery of a particular low-rank matrix: if you give me a specific low-rank matrix and I can recover it, I call it weak. So strong, if you think about it, applies to being able to recover all matrices, while weak, in some sense, means being able to recover almost all matrices. In fact, when you do simulations, what you actually encounter is the weak notion, because you can never check whether you're recovering every matrix; and it's these transitions that you observe in practice, so the curves I'll show towards the end of the talk correspond to the weak case. Again, these have natural counterparts in compressed sensing that people have analyzed.
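As a compact restatement (my paraphrase, not a formula from the slides): say that nuclear norm minimization succeeds on X0 when

```latex
\{X_0\} \;=\; \arg\min_{X} \bigl\{\, \|X\|_{*} \;:\; \mathcal{A}(X) = \mathcal{A}(X_0) \,\bigr\}.
```

Strong recovery requires this for every X0 of rank at most r; sectional recovery requires it for every such X0 whose column and row spaces lie in fixed subspaces; weak recovery requires it for one given X0.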
0:12:58 So what I'll do in this paper is give recovery conditions for all three of these notions that improve on what's in the literature, and based on those we were able to improve the bounds on the phase transitions.
0:13:16 Okay, so let's go into this in a little more detail. Again, the measurements we're looking at are a linear mapping: it takes an n1-by-n2 matrix, the low-rank matrix I'm trying to recover, and maps it to some m measurements, where m represents the number of measurements. So m is much less than n1 times n2: I'm not seeing every entry of the matrix, I'm making far fewer measurements, but I want to recover it given that I know it's low rank. And again, the way I'm going to model these measurements here is that each measurement is obtained by an inner product with an i.i.d. Gaussian matrix. So what we mean by strong recoverability, which I alluded to on the previous slide, is this: the strong recovery threshold is the largest rank r such that all n1-by-n2 matrices of rank r can be recovered via the nuclear norm relaxation. Now, as n1 and n2 go up, you might imagine the recoverable rank goes up, but it's actually the other way around: as n1 and n2 go up, the rank goes down, because for a fixed number of measurements there are more degrees of freedom; and if I increase m, then of course r will go up.
0:14:33 So here's the theorem, and it's actually this theorem that has enabled us to analyze things better; the practical part of this theorem is the if-and-only-if. It says that any matrix of rank at most r can be recovered via nuclear norm minimization if and only if a null space condition holds: for any matrix W in the null space of the operator, that is, script A applied to W equals zero (again, this is a linear operator), the following property holds. You take W, you compute its singular values and order them, and the sum of the first r of them, where r is the rank you're looking at, must be less than the sum of the rest. That's the condition, and we need it for every W.
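Written out (my transcription of the condition as stated, with the singular values of W ordered so that sigma_1(W) >= sigma_2(W) >= ...):

```latex
\mathcal{A}(W) = 0,\ W \neq 0
\quad\Longrightarrow\quad
\sum_{i=1}^{r} \sigma_i(W) \;<\; \sum_{i>r} \sigma_i(W).
```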
0:15:28 Again, those of you who are familiar with the null space condition in compressed sensing will recognize this as very much the analogue of that condition. In compressed sensing you again look at the null space of your measurement matrix, and the condition is that the sum of the absolute values of the r largest entries should be less than the sum of the absolute values of the remaining entries. What happens in the matrix case is that the absolute values get replaced by singular values; that's all that happens here. The difficulty is that in the compressed sensing case those inequalities essentially give you a polytope, because they are all, if you will, linear in W; here it's much more complicated because of the presence of the singular values, so none of these inequalities is linear. There's a proof of this in the paper, which I won't go into; it's based on some majorization theorems. And again, it's really this condition that has allowed us to improve the thresholds; there was no such if-and-only-if condition before.
0:16:30 If you look at the sectional case, recovery in the sectional sense works as follows: you pick subspaces S_P and S_Q, of dimension r say, you define projection operators onto these two subspaces, and you say the operator has sectional recoverability if all rank-r matrices whose column and row spaces lie in S_P and S_Q can be recovered. Then there's again an if-and-only-if condition for that: you look at any W in the null space of your measurement operator, and you require, roughly, that the nuclear norm of P W Q, where P and Q are these projection matrices, should be less than the corresponding complementary term. Again, it's an if-and-only-if condition. And finally, the weak threshold is for an arbitrary but fixed X0 that you want to recover, up to rank r, again through nuclear norm minimization. The if-and-only-if condition here is the following: you take your X0, you compute its SVD, and you basically require that a certain condition holds for everything in the null space; it's a trace condition, shown here. So that's what we need to analyze, and the difficulty in analyzing it is that you need these conditions to hold for every element W in the null space of your measurement operator. Now, when your measurement matrices are i.i.d. Gaussian, it turns out that the elements of the null space are also Gaussian distributed, so you essentially want the condition to hold for all Gaussians in that null space, and that's where the difficulty is.
0:18:09 Let me again just introduce some notation before I show you the main result and the curves. Again, the matrix is going to be n1 by n2, and I'll take n1 less than n2. The aspect ratio, n1 over n2, think of that as the quantity alpha. I'm going to look at ranks that are, say, beta times n1, where beta is some number less than one. And then the number of measurements I'm going to make is proportional to the size of the matrix, n1 times n2, with the proportionality constant again less than one. So what I want to do, for example, is fix n1, n2, and the number of measurements I'm making, and see how big the rank r can get; that will be the threshold. To describe the threshold I need this quantity gamma of alpha and beta, which are the parameters of the problem. It's basically defined as this quantity here: the expected value of the sum of the first alpha-beta-n singular values of an i.i.d. Gaussian matrix, which, with the right normalization, becomes a quantity independent of n. That's what this gamma is, and it can be computed; I give the expression here, and standard random matrix theory will give you a closed-form expression for it.
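Since the exact normalization isn't spelled out in the talk, here is a hedged Monte Carlo sketch of the kind of quantity being described: the expected sum of the largest roughly beta*n1 singular values of an i.i.d. standard Gaussian n1 x n2 matrix, divided here by n2 to the power 3/2 as one plausible normalization that makes the value roughly independent of n. Treat the normalization, and the specific parameter choices, as my assumptions, not the paper's definition.

```python
import numpy as np

def gamma_mc(alpha, beta, n2=400, trials=20, rng=None):
    """Monte Carlo estimate of the kind of quantity gamma(alpha, beta)
    described in the talk: the expected sum of the largest ~beta*n1
    singular values of an i.i.d. standard Gaussian n1 x n2 matrix
    (n1 = alpha * n2), normalized here by n2**1.5 so that the value is
    roughly independent of n2.  The normalization is an assumption."""
    rng = rng or np.random.default_rng(0)
    n1 = int(round(alpha * n2))
    k = int(round(beta * n1))
    total = 0.0
    for _ in range(trials):
        s = np.linalg.svd(rng.standard_normal((n1, n2)), compute_uv=False)
        total += np.sum(s[:k])  # singular values come sorted, largest first
    return total / (trials * n2 ** 1.5)

# The estimate should change very little as n2 grows, illustrating the
# claimed n-independence under this normalization.
for n2 in (200, 400, 800):
    print(n2, gamma_mc(alpha=0.5, beta=0.4, n2=n2))
```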
0:19:26 And so these are the results: we have bounds on the thresholds, for the weak, sectional, and strong cases, and they are in terms of alpha, beta, and this gamma function.
0:19:40 Okay, so let's look at what these thresholds look like. What I've plotted here: this axis is the number of samples I'm taking, so if I'm sitting at 0.5, it means the number of measurements is half the number of entries in the matrix. What I have on the other axis is a surrogate for the rank; what it really means is the inverse of the oversampling. So at 0.5, where I'm getting something around 0.6, multiplying 0.5 by 0.6 gives 0.3, which means I can recover matrices up to rank 0.3 times the dimension; this, I should have said, is for square matrices. Or, looking at another point, at 0.2 here, where the curve is about 0.5, I multiply and get 0.1, so it's saying that if I observe one fifth of the entries of the matrix, or rather make that many measurements, I can go up to ranks that are one tenth of the dimension. That's the threshold we're trying to predict.
0:20:37 And the theorems I just showed you make the following predictions: that's the weak threshold, this is the sectional, and this is the strong. Just to show you how this improves on the previous results: this was the previous strong threshold, and the dotted line, which matches our sectional curve, was the previous weak threshold. So we've improved them quite a bit, but they're still quite a bit off; at least in this regime, though, it works well.
0:21:05 I won't go into the technical details of this, because these results were all trumped, but basically the difficulty in analyzing this problem is that you have quantities like these that you need to bound for all W. Essentially, the way we did that was to bound the expected value and then use concentration-of-measure type results: if you can bound the expected value of things like these, and you can prove that they concentrate, then you apply concentration inequalities; it's a lot of hard work in that style. But as I said, after we submitted this paper we came up with another analysis, and that analysis essentially gives these solid curves: that's the weak threshold versus the weak threshold of the current paper, this is the sectional threshold versus the current paper's, and this is the strong one versus this paper's. And pretty much this is on the money now; we're actually able to analytically predict the threshold.
0:22:10 That's not in this paper; it's in the ISIT paper. I'll just make a little bit of an advertisement for where this result comes from. The difficulty in what we're trying to do is that we're trying to extend Donoho's approach to computing compressed sensing thresholds, and those computations are difficult because they're based on Grassmann angles, which you can compute in principle for polytopes, but for these nonlinear objects they're much harder. But a student of mine, and again I'll advertise his work from a couple of years back, came up with an alternative approach for compressed sensing, based on an escape-through-a-mesh type idea, to compute L1 optimization thresholds. And the good news is that this analysis extends directly to the matrix case and allows us to basically compute the thresholds.
0:23:03 And the results are analytical; let me give you a sense of them. If you look at strong recovery, the number of measurements has to satisfy this expression; for example, in the square case, where n1 equals n2 equals n, it becomes 16rn. Now, a rank-r matrix has roughly rn entries' worth of degrees of freedom, 2rn to be specific, and 16rn is eight times that, so for the strong recovery threshold you need eight times more measurements than the degrees of freedom. That's why the point you see here on the strong curve is one over eight. For weak recovery, this is what you get, and when n1 equals n2 it becomes 6rn, so you're at three times the number of degrees of freedom of a rank-r matrix, and again this point here is one third.
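For the square case, n1 = n2 = n, the arithmetic just described is (my restatement):

```latex
m \;\gtrsim\; 16\,rn \;=\; 8 \times (2rn) \quad \text{(strong)},
\qquad
m \;\gtrsim\; 6\,rn \;=\; 3 \times (2rn) \quad \text{(weak)},
```

where 2rn is roughly the number of degrees of freedom of a rank-r n-by-n matrix, which is why the indicated points on the strong and weak curves are 1/8 and 1/3.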
0:23:58 So I guess I'll stop here. I've explained what these thresholds are, and we had results, but as I said, we can now compute these thresholds exactly. So I'll stop there.
0:24:21 First I have to admit I'm not really in this area, but did I understand correctly: if I give you a matrix, and you have a measurement operator that satisfies nice properties, you can actually reconstruct the matrix exactly from the measurements?

0:24:45 That's what these rank minimization problems do. As I said, it's a lot like compressed sensing, where you have a sparse signal, you make measurements of the entries, or linear combinations of them, and you can recover the signal because you have sparsity. Here, if you have a low-rank matrix and you make measurements, meaning you see certain entries or certain linear combinations of entries, you can recover the entire matrix due to the fact that it's low rank. Thank you.
0:25:20 Yes, so the analysis is for Gaussians. Even in the compressed sensing case, the exact analysis for the threshold is only available in the Gaussian case. But what people have observed is this: if you take an i.i.d. Gaussian matrix you can compute the threshold, which is what Donoho did; if instead you take something that is not i.i.d. Gaussian, say i.i.d. Bernoulli or other types of ensembles, and you do L1 optimization, people observe a phase transition at the same point. Even if you look at sparse measurement matrices, things like expander graphs and so on, they seem to exhibit the same behavior. Why that is is not fully understood; there's a paper in the Proceedings of the IEEE by Donoho where he talks about the various thresholds and this universal-looking behavior. So we don't fully understand it, but it appears that, while the analysis is doable for the Gaussian case, the result is more universal. We haven't done as much investigation of this for the matrix rank minimization problem as for compressed sensing, but roughly we've seen the same thing.
0:26:37 I wanted to ask: is it possible to verify the conditions in these theorems for a given measurement matrix?

0:26:53 No. I mean, this is similar to the situation, if you're familiar with it, even in the compressed sensing case: when you look at the if-and-only-if conditions for when you can actually recover a sparse signal, they are again in terms of null space conditions, and you have to look at every vector in the null space of your measurement matrix, so in principle those cannot be checked; it's easy to show that checking them is NP-hard. Here it's even more so, because your null space is a space of matrices. That's the reason why, when people want to analyze these conditions, they go to these ensembles of measurement matrices, because there you can say that the property holds with very high probability. But the quick answer is no: if you give me a specific matrix, there's no way I can check it.
0:27:46 What happens here if the matrix to be reconstructed has a particular symmetry?

0:27:53 Well, yes, if it's symmetric we have looked at that, and roughly what happens is that the threshold improves by about a factor of two; certainly at the very end it does. If your matrix is symmetric, basically the factor of two here goes away. What happens for, say, the Toeplitz case I don't know, but, for example, if you know the matrix is non-negative definite, then it again improves, so you need fewer measurements. Toeplitz I don't know, but that's an interesting question, because if you look at the system identification problem, which is an important one, you'd be doing it for a Hankel matrix.

0:28:40 A related question: what if you know the matrix is all positive, that is, every entry is positive?

0:28:49 I haven't looked at that; we've analyzed the positive semidefinite case, but I haven't looked at the case where the entries of the matrix are positive.
0:29:03 Okay, thank you so much.