Thank you very much for the introduction. I'll talk about something that's a little bit different from the rest of the talks in the session. The session is about compressed sensing and sparse representations, and I guess this falls more under sparse representation and compressed sensing: I'll be talking about the rank minimization problem. This is the problem where you have a low-rank matrix, you make linear measurements of its entries, and you want to recover the matrix. You can think of there being a sparse representation in the sense that it's a low-rank matrix, so the degrees of freedom in the matrix are far fewer than the number of entries. I want to look at nuclear norm minimization, and in particular at determining thresholds that allow us to recover the matrix perfectly. This is joint work with my graduate student Samet Oymak.

Okay, so I'll say a few words about the rank minimization problem, why it matters, some applications where it comes up, and the relaxation, namely the nuclear norm relaxation. As I said, I'll focus in this talk on trying to predict the recovery thresholds for this algorithm. I'll mention some prior work, and then formally define the problem we want to look at. The main contributions of the paper are null-space-type conditions for recovery, and then new recovery thresholds obtained from them. I'll say only a few words about the technical details involved in the analysis, and the reason is that, as I'll mention towards the end, after we submitted this paper we obtained some new results which trump everything I'm going to say here today. I have to present this anyway, but I'll tell you something about the new results too; they'll be presented at ISIT in a couple of months in Saint Petersburg, so if any of you are going to be there, you may hear about them there.

Okay, so what is the problem we're looking at? This session is on sparse representation, and sparsity has become very popular; compressed sensing is perhaps the most popular problem in this area. Another situation where you get sparsity is when you have a low-rank matrix. A general n1 by n2 matrix has n1 times n2 degrees of freedom; if it's low rank it has much fewer, roughly the rank times the dimension of the matrix. And if you think of the matrix in terms of its singular values, the singular values are sparse if it's a low-rank matrix, because most of the singular values will be zero.
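Both facts, the reduced degrees-of-freedom count and the sparse spectrum, are easy to see numerically. Here is a quick numpy sketch of my own (not from the talk), with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r = 40, 60, 3

# A rank-r matrix, built as a product of two thin Gaussian factors.
X = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

s = np.linalg.svd(X, compute_uv=False)
num_nonzero = int(np.sum(s > 1e-8 * s[0]))
print(num_nonzero)         # -> 3: only r of the min(n1, n2) singular values are nonzero

# Degrees of freedom of a rank-r matrix vs. its entry count:
print(r * (n1 + n2 - r))   # -> 291, far fewer than n1 * n2 = 2400
```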

This problem comes up in a variety of applications. It comes up a lot in control, in particular in system identification: if you have measurements of a system and you want to figure out a low-order system that matches them, the Hankel matrix that comes out of your measurements has to be low rank, and so on. It comes up in collaborative filtering problems; you may have heard of the Netflix problem, and it comes up there. It comes up in a lot of learning problems, and recently people have actually looked at traditional CS-type problems like graph clustering and have been able to show that you can put them into a rank minimization problem: if you look at the adjacency matrix of a graph and you want to cluster it into things that are almost clique-like, it essentially boils down to identifying a low-rank component in, if you will, the adjacency matrix. So it comes up in a lot of applications.

Let me formally define it. We have some matrix X0 which is rank r, and we're given some measurements of its components; I'm representing those by this script A. The only thing that matters right now about script A is that it's a linear operator, so I'm making linear measurements, if you will, of the entries of this matrix. The measurements are such that if this is an n1 by n2 matrix, there are n1 times n2 unknowns, but I'm making much fewer measurements. Nonetheless I want to be able to recover the matrix, because I know it's low rank. So the problem we'll try to solve is to minimize the rank: among all matrices consistent with what I measured, find the one of lowest rank. Again, this is a difficult problem, as the compressed sensing problem is, because the rank function is not convex, and so the approach is to relax the rank with the nuclear norm of X, the nuclear norm being the sum of the singular values of X.
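As a concrete check of the definition (my own numpy sketch, not the paper's code): the nuclear norm is just the sum of the singular values, while the rank counts how many of them are nonzero.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 8))  # a rank-3, 5 x 8 matrix

s = np.linalg.svd(X, compute_uv=False)
nuclear_norm = s.sum()                  # nuclear norm: sum of the singular values

# numpy exposes the same quantity directly as a matrix norm:
assert np.isclose(nuclear_norm, np.linalg.norm(X, ord='nuc'))

print(np.linalg.matrix_rank(X))         # -> 3: the number of nonzero singular values
```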

Okay, so this is similar to compressed sensing, where you take the L0 norm and relax it to the L1 norm. This relaxation was first done, I think, in the PhD thesis of Maryam Fazel around ninety-nine or two thousand; it goes back to that. And just as in the L1 case, the reason this makes sense is that the nuclear norm is in some sense the tightest convex relaxation for this problem: in the same sense that the L1 ball is the tightest, if you will, convex body that contains all of your sparse vectors, the nuclear norm ball is the smallest convex body that contains all rank-one matrices.

Okay, now let me say a few words about the measurements. There are two types of measurements people have looked at in the rank minimization problem. One is where you actually see entries of the matrix at random; in this case the problem is called matrix completion: you see certain entries of the matrix and you want to figure out the remaining entries. The Netflix problem certainly falls under this kind of situation. The other type of measurement is when the measurements you get are actually inner products of the matrix with some measurement matrices A_i; the inner product is really the trace of A_i transpose X. So you just form linear combinations: each measurement is some linear combination of the entries of X. This is natural, for example, in quantum tomography type problems, because your density matrix is typically the state of your system, and it's a linear combination of a few pure states, so it's low rank; and in quantum tomography the measurements you make are of this type, so it's very natural there. It's also somewhat natural in some other applications.
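The inner-product measurement model is simple to write down in code. Here is a small numpy sketch of my own (the variable names and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, m = 10, 12, 40

# Rank-2 ground truth and m i.i.d. Gaussian measurement matrices A_i.
X0 = rng.standard_normal((n1, 2)) @ rng.standard_normal((2, n2))
A = rng.standard_normal((m, n1, n2))

def measure(X):
    # y_i = trace(A_i^T X), i.e. the entrywise inner product <A_i, X>.
    return np.array([np.trace(Ai.T @ X) for Ai in A])

y = measure(X0)
print(y.shape)   # -> (40,): m linear combinations of the n1*n2 = 120 entries

# The operator is linear: A(X + Z) = A(X) + A(Z).
Z = rng.standard_normal((n1, n2))
assert np.allclose(measure(X0 + Z), y + measure(Z))
```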

Okay, so for the matrix completion problem there's an analysis that's been done; there are conditions on when you can actually recover from looking at entries of the matrix. It requires something called incoherence, and the proofs use dual certificates, a convex optimization technique. The inner-product measurements are much closer to the types of measurements we're familiar with in compressed sensing, where you have random measurement matrices, and so a lot of compressed sensing results can be translated to this particular case, the inner-product case. In particular, things like the RIP: just as RIP conditions on your measurement matrix guarantee that you can recover things in compressed sensing, there are RIP-type conditions on these A_i that guarantee that nuclear norm minimization works. Likewise, similar to the null space property that we have in compressed sensing, there's a null space property, and I'll talk about these a bit more, in this inner-product measurement case, that allows nuclear norm minimization to work. In fact, these will be the focus of today's talk.

Okay, so let me say a few words about prior work with regard to RIP. As I said, people have looked at this, and NNM means nuclear norm minimization; they've studied it based on RIP. In fact, the first paper to do this is from two thousand seven, by Recht, Fazel, and Parrilo. They were the first to actually look at nuclear norm minimization using ideas from compressed sensing, and they gave some results on when you can actually recover low-rank matrices. Candès and Plan have recently improved the results, again using RIP-type arguments: what they've shown is that to recover a rank-r, n by n matrix you need on the order of r n measurements. If you think of a rank-r, n by n matrix, there are roughly r n degrees of freedom in it, so this says you need the same order of measurements to be able to recover. And you'll see that we improve on this result quite a bit.

People have also looked at recovery using the null space property, and I'll talk more about this. In the null space case, it's easiest to do the analysis when your measurements are i.i.d. Gaussian; I'll talk about that a bit more. What people have observed when they actually run algorithms is that, similar to the compressed sensing case, there are sharp phase transitions: if you fix the number of measurements you're making, and the dimensions of your matrix are fixed and everything is large, then as you increase the rank there's some phase transition beyond which you can never recover and below which you almost always recover. What we want to do is see if we can find analytically what that phase transition is.

A few years back, with Ben Recht and Weiyu Xu, we attempted to determine this phase transition, and I'll show you some of the curves later for what we did there. What we were trying to do in this paper was to extend a lot of the results that exist for conventional compressed sensing. Donoho, again a few years back, determined what these phase transitions were for conventional compressed sensing with i.i.d. Gaussian measurements, using the Grassmann angle approach. I'll talk more about this in a moment, but the idea is that you can recover signals if the null space, if you will, of your measurement matrix lives outside certain polytopes, and to guarantee that it lies outside those polytopes you compute the Grassmann angle. The difficulty in the matrix problem is that the null space condition no longer gives you a polytope but something nonlinear, and so this approach does not extend, which is why we had difficulties here. I won't get into the details of why things are much more complicated.

Let me say something about the types of recovery that we want; we'll introduce three notions of recovery. There's a strong notion: given a measurement operator, I'll say I have strong recoverability if I can recover any low-rank matrix up to a certain rank. If I can recover all such matrices, I call it strong. I'll call it sectional recoverability if it holds not for all low-rank matrices, but for all matrices whose column and row spaces are fixed: so we're looking at low-rank matrices whose column and row spans are fixed rather than arbitrary, and if I can recover all of those, I call it sectional. Weak recoverability is recovery of a particular low-rank matrix: if you give me a low-rank matrix and I can recover it, I call it weak. So strong, if you think of it, corresponds to being able to recover all matrices, and weak, in some sense, means being able to recover almost all matrices. In fact, when you do simulations, what you actually encounter is the weak one, because you could never check whether you're recovering every matrix. But this is the transition you observe in practice, and the curves I'll show you towards the end of the talk all correspond to the weak threshold. Again, these have natural counterparts in compressed sensing that people have looked at. What we do in this paper is give recovery conditions for all three of these that improve on what's in the literature, and based on them we're able to actually improve the bounds on the phase transition.

Okay, so let's go into this in a little more detail. Again, the measurements we're looking at are some linear mapping: it takes the n1 by n2 matrix, the low-rank matrix I'm trying to recover, and maps it to some m measurements, where m represents the number of measurements. And m is a lot less than n1 times n2: I'm not seeing every entry of the matrix, I'm making fewer measurements, but I want to recover it given that I know it's low rank. And again, the way I'm going to model these measurements here is that each measurement is obtained by an inner product with an i.i.d. Gaussian matrix. So what we mean by strong recoverability, which I alluded to on the previous slide: the strong recovery threshold is the largest number r for the rank of the matrix such that all n1 by n2 matrices of rank r can be recovered via the nuclear norm relaxation. Now, you might imagine that as n1 and n2 go up, this rank goes up; but it's actually the other way around: as n1 and n2 go up, the rank goes down, because for a fixed number of measurements there are more degrees of freedom. And if I increase m, of course r will go up.

So here's the theorem, and it's actually this theorem that has allowed us to analyze things better; the important aspect of this theorem, in practice, is that it's an if and only if. It says: any matrix of rank at most r can be recovered via nuclear norm minimization if and only if a null space condition holds. Take any matrix W in the null space of the operator, that is, script A applied to W is equal to zero; again, this is a linear operator. The property is the following: you take W, compute its singular values, order them, and the sum of the first r of them, where r is the rank you're looking for, must be less than the sum of the rest. That's the condition, and we need it to hold for every such W; then recovery holds, if and only if. Those of you familiar with the null space condition in compressed sensing will recognize this as very much its analogue: there you look at the null space of your measurement matrix, and the condition is that the sum of the absolute values of the r biggest entries should be less than the sum of the absolute values of the remaining entries. What happens in the matrix case is that the absolute values get replaced by singular values; that's all that happens here. The difficulty is that in the compressed sensing case those inequalities essentially give you a polytope, because they're all, if you will, linear in W; here it's much more complicated because of the presence of the singular values, so none of these inequalities is linear. There's a proof of this in the paper, which I won't go into; it's based on a majorization theorem. And again, it's really this condition that has allowed us to improve the thresholds; there was no such if-and-only-if condition before.
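The null space condition above is easy to test on a single candidate W. This is my own numpy sketch of the stated condition, not code from the paper:

```python
import numpy as np

def nullspace_condition(W, r):
    """The talk's condition on one null-space element W: the sum of the
    r largest singular values of W must be less than the sum of the rest."""
    s = np.linalg.svd(W, compute_uv=False)   # returned in decreasing order
    return s[:r].sum() < s[r:].sum()

rng = np.random.default_rng(3)

# A generic Gaussian matrix spreads its mass over many singular values,
# so the condition holds comfortably for small r.
W = rng.standard_normal((30, 30))
assert nullspace_condition(W, 1)

# A rank-one matrix concentrates all its mass in the top singular value,
# so the condition fails.
W1 = np.outer(rng.standard_normal(30), rng.standard_normal(30))
assert not nullspace_condition(W1, 1)
print("condition behaves as expected")
```

Of course, the theorem needs this to hold for every W in the null space at once, which is exactly what makes the analysis hard.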

okay

If you look at the sectional case: recovery in the sectional sense means you pick two subspaces S_P and S_Q, say of dimension r, and you define projection operators onto these two subspaces. You say the operator has sectional recoverability if all rank-r matrices whose column and row spans are S_P and S_Q can be recovered. There's again an if-and-only-if condition for that: for any W in the null space of your measurement operator, you require that the nuclear norm of P W Q, where P and Q are these projection matrices, be less than that of the rest of W; again, it's an if-and-only-if condition. And finally, the weak threshold is for an arbitrary X0 that you want to recover, up to rank r, again through nuclear norm minimization. The if-and-only-if condition here is the following: you take your X0, you do the SVD, and you basically require, for everything in the null space, that this condition holds; it's a trace condition, shown here.

So that's what we need to analyze, and the difficulty in analyzing it is that you need, for example, these conditions to hold for every element W in the null space of your measurement operator. Now, when your measurement matrices are i.i.d. Gaussian, it turns out that the elements of the null space are also Gaussian distributed, and so you essentially want the condition to hold for all Gaussians in that null space; that's where the difficulty is.

Let me introduce some notation before I show you the main result and the curves. The matrix is n1 by n2, and I'll take n1 less than n2. The aspect ratio, n1 over n2, is the quantity alpha, which is less than one. I'm going to look at a rank that is, say, beta times n1, where beta is less than one. And the number of measurements I make is proportional to the size of the matrix, n1 times n2, again with a proportionality constant less than one. So what I want to do, for example, is fix n1, n2, and the number of measurements I'm making, and see how big r can get; that will be the threshold. To describe the threshold I need the quantity gamma of alpha and beta, which are the parameters of the problem. It's basically defined as this quantity here: the expected value of the sum of the first alpha beta n singular values of an i.i.d. Gaussian matrix; with this normalization it becomes a quantity independent of n. It can be computed; I give the expression here, and standard random matrix theory will give you the closed-form expression.
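To get intuition for the kind of quantity gamma(alpha, beta) is, one can Monte-Carlo the expected partial sum of Gaussian singular values. The normalization below (dividing by the full singular-value sum so the answer is a fraction) is my own simplification for illustration; the paper's exact normalization by the matrix size differs.

```python
import numpy as np

def top_sv_fraction(alpha, beta, n2=200, trials=20, seed=4):
    """Expected fraction of an i.i.d. Gaussian matrix's nuclear norm
    carried by its top beta*n1 singular values, where n1 = alpha*n2.
    A Monte Carlo sketch of the flavor of gamma(alpha, beta); the paper
    uses a different (size-based) normalization, this is illustrative only."""
    rng = np.random.default_rng(seed)
    n1 = int(alpha * n2)
    k = int(beta * n1)
    fracs = []
    for _ in range(trials):
        s = np.linalg.svd(rng.standard_normal((n1, n2)), compute_uv=False)
        fracs.append(s[:k].sum() / s.sum())
    return float(np.mean(fracs))

f = top_sv_fraction(alpha=1.0, beta=0.1)
print(f)   # around 0.2: the top 10% of singular values carry roughly 20% of the mass
```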

So here are the results: we have bounds on the thresholds, for the weak, sectional, and strong thresholds, in terms of alpha, beta, and the gamma function. Let's look at what these thresholds look like.

Here is what I've plotted. On this axis is the number of measurements I'm making: if I'm at 0.5, that means the number of measurements is half the number of entries in the matrix. What I have on the other axis is a surrogate for the rank; what it really means is the inverse of the oversampling. So at 0.5, a curve value of around 0.6 multiplied by 0.5 gives 0.3, which means I can recover matrices up to rank 0.3 times the dimension. This, I should have said, is for square matrices. Or, looking at 0.2 here, where the curve is at 0.5, I multiply to get 0.1: it's saying that if I observe one fifth of the entries of the matrix, or make that many measurements, I can go up to ranks that are one tenth of the dimension. That's the threshold we're trying to predict, and the theorems I just showed you give the following predictions: that's the weak threshold, this is the sectional, and this is the strong. And just to show you how this improves on the previous results: this was the previous strong threshold, and the dotted line, which matches our sectional, was the previous weak threshold. So we've improved them quite a bit, but they're still quite a bit off, although at least in this regime it works well.

I won't go into the technical details, because these results were all trumped, but basically the difficulty in analyzing this problem is that you have conditions like these that you need to bound for all W. The way we did that was to essentially bound the expected value, and then use concentration-of-measure type results: if you can bound the expected value of quantities like these, and there's random matrix theory to do that, and if you can prove Lipschitzness, then you can use concentration inequalities. It's a lot of hard work.

But as I said, after we submitted this paper we came up with another analysis, and that analysis essentially gives these solid curves: that's the weak threshold against the weak threshold of the current paper, this is the sectional threshold against the current paper's, and this is the strong one against the current paper's. And it's pretty much on the money now: we're actually able to analytically predict the threshold. It's not in this paper; it's in the ISIT paper.

I'll just make a little bit of an advertisement for where this result comes from. The difficulty in what we were trying to do is that we were trying to extend Donoho's approach to computing compressed sensing thresholds, and that is difficult because it's based on Grassmann angles, which you can compute in principle for polytopes, but which are much harder to compute for these nonlinear objects. A student of mine, again to advertise his work of a couple of years back, came up with an alternative approach for compressed sensing, based on an escape-through-the-mesh type idea, to compute L1 optimization thresholds. The good news is that this analysis directly extends to the matrix case and allows us to basically compute the thresholds, and the results are analytical. I'll give you a sense of them. If you look at strong recovery, the number of measurements has to satisfy this condition; for example, in the square case, when n1 is equal to n2, it becomes sixteen r n. Now, if I have a rank-r, n by n matrix, there are roughly r n degrees of freedom in it, 2 r n to be specific, and sixteen r n is eight times that, so you need eight times more measurements than the degrees of freedom for the strong recovery threshold. That's why the point you see here on the strong curve is one over eight. For weak recovery, this is what you get: when n1 is equal to n2 it becomes six r n, so you're at three times the number of degrees of freedom of a rank-r matrix, and again this point here is one third.
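The oversampling factors quoted above reduce to simple arithmetic. A sanity check of my own in Python, using the 2rn degrees-of-freedom count the talk uses for a square matrix:

```python
# For a square n x n matrix of rank r, the talk counts ~2*r*n degrees of freedom.
# Strong recovery needs m ~ 16*r*n measurements, weak recovery m ~ 6*r*n.
dof_per_rn = 2

strong_factor = 16 / dof_per_rn   # measurements per degree of freedom, strong
weak_factor = 6 / dof_per_rn      # measurements per degree of freedom, weak

print(strong_factor, weak_factor)          # -> 8.0 3.0: 8x and 3x oversampling
print(1 / strong_factor, 1 / weak_factor)  # -> 0.125 and 0.333...: the 1/8 and 1/3 curve endpoints
```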

So I guess I'll stop here. I've explained what these thresholds are, and we had results, but as I said, we can now get these exactly. So I'll stop there.

Question: I'm not very familiar with this material, so let me check that I understood correctly. If you give me a matrix, and the measurement operator satisfies nice properties, you can actually recover the unknown matrix exactly from the measurements; that's what these rank minimization problems do?

Answer: Yes. It's analogous to compressed sensing, where you have a sparse signal, you make measurements of the entries or linear combinations of them, and you can recover the signal because of the sparsity. Here, if the matrix is low rank and you make measurements, that is, you see certain entries, or linear combinations of the entries, you can recover the entire matrix thanks to the fact that it's low rank.

Question: [inaudible]

Answer: The analysis is for Gaussians; even in the compressed sensing case, the exact analysis for the threshold is only for the Gaussian case. But here is what people have observed. If you take an i.i.d. Gaussian matrix, you can compute the threshold; that's what Donoho did. If you use matrices that are not i.i.d. Gaussian, say i.i.d. Bernoulli or other types, and do L1 optimization, people observe phase transitions at the same point. Even if you look at sparse measurement matrices, things like expander graphs and so on, they seem to exhibit the same thing. Why that is is not fully understood; there's a paper in the Proceedings of the IEEE by Donoho where he talks about the various thresholds and this universal-looking behavior. So we don't fully understand it, but while the analysis is doable for the Gaussian case, the result appears to be more universal. We haven't done as much investigation for the matrix rank minimization problem as for compressed sensing, but roughly we've seen the same thing.

Question: About the conditions you propose: is it possible, for a given matrix, to verify the conditions in these theorems?

Answer: No. This is similar to compressed sensing: if you look at the if-and-only-if conditions for when you can recover a sparse signal, they're again in terms of null space conditions, and you have to check every vector in the null space of your measurement matrix, so in principle they cannot be checked; it's easy to show it would be NP-hard to do so. Here it's even more so, because your null space is a space of matrices. That's the reason why, when people want to analyze these, they go to ensembles of measurement matrices, because there you can say the property holds with very high probability. But the quick answer is no: if you give me a matrix, there's no way I can check it.

Question: What happens if the matrix to be reconstructed has a particular symmetry?

Answer: Yes, if it's symmetric... I haven't worked that out exactly, but roughly what happens is that the thresholds improve by about a factor of two; certainly at the very end they do. If your matrix is symmetric, basically the factor of two here goes away. What happens for, say, the Toeplitz case I don't know; but, for example, if you know the matrix is non-negative definite, then it again improves, so you need fewer measurements. Toeplitz I don't know, but that's an interesting question, because if you look at the system ID problem, which is an important one, you'd be doing it for a Hankel matrix.

Question: A related question: what if you know that all the entries are positive, that is, every entry of the matrix is positive?

Answer: I haven't looked at that. We've analyzed the positive semidefinite case, but I haven't looked at the case where the matrix is entrywise positive.

Okay, thank you so much.