0:00:13 Oh, thank you. The topic of my presentation is best-first search for compressed sensing signal recovery, and I will introduce the A*OMP algorithm.
0:00:26 To give an outline of the presentation: I will start with the compressed sensing reconstruction problem and matching pursuits, then try to give the motivation for incorporating a multi-path search into the problem. I will then introduce the A*OMP algorithm and demonstrate its reconstruction performance on 1D and 2D reconstruction problems. Finally, I will conclude the presentation with a short summary.
0:00:57 Well, we have already seen a number of good presentations about the compressed sensing reconstruction problem. Principally, we are interested in recovering a K-sparse signal x from a reduced set of measurements. Say the signal has length N and we take M measurements from it, where M < N; we would like to reconstruct this K-sparse vector x of length N.
0:01:23 The well-known l0 reconstruction can be written as here: we are interested in the minimum l0 norm x that satisfies the observation. Obviously this is directly intractable, so a number of reconstruction approaches have appeared in the literature.
0:01:40 We can categorise them into a few main categories that I have listed here, but in this presentation we will mostly be interested in the greedy pursuits. On the next slide I would like to talk about some of them, the matching pursuits. These are iterative algorithms that either build up the solution from an initially empty support, like matching pursuit and orthogonal matching pursuit, or refine a sparse solution, as in the case of subspace pursuit or compressive sampling matching pursuit.
0:02:18 For our purposes, orthogonal matching pursuit is the most important one, since our algorithm is based on it.
0:02:24 OMP works as follows: it identifies one non-zero coefficient of x in each iteration. It selects as the next non-zero coefficient the dictionary atom that has the largest inner product with the prediction residual. Then we compute an orthogonal projection of the observation onto the subspace spanned by the set of selected atoms, so that the residual after the projection is orthogonal to the subspace spanned by that set.
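The OMP iteration just described, greedy atom selection followed by re-projection, can be sketched in a few lines. This is an illustrative implementation, not the speaker's code; `A` is the M-by-N dictionary, `y` the observation, and `K` the sparsity:

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal matching pursuit: add one atom per iteration, then
    re-project y onto the span of all atoms selected so far."""
    support = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(K):
        # pick the atom with the largest inner product with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # orthogonal projection onto the selected atoms; the new residual
        # is orthogonal to the subspace they span
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

For a well-conditioned dictionary and small enough K, this recovers the K-sparse vector exactly.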
0:02:59 Well, this follows a single-path strategy: it selects one vector, or one non-zero coefficient, after the other. In this work we would like to propose a multi-path strategy that instead searches for the solution among a number of candidates, as you can see here.
0:03:20 So we try to search for the solution among a number of dynamically evolving candidates. Provided a proper path selection mechanism to find the best, or most promising, candidate, we can follow the most probable candidate at each step and finally reach the solution.
0:03:41 In order to solve this multi-path problem, I will introduce the A* orthogonal matching pursuit, in short the A*OMP algorithm, which combines the A* search with orthogonal matching pursuit.
0:03:53 So first let's have a look at how we can represent this compressed sensing problem as a tree search. This is straightforward, but I want to give it for the sake of completeness. We represent each dictionary element as a node, as you can see; the same dictionary element may appear on different paths of the tree. Each path is then a candidate solution.
0:04:19 Here, for example, we see a path of length four. As you can see, it has a superscript and a subscript: the superscript is the number of the path, and the subscript, four here, is the length of the path. Each path has a residual, and we compute a cost based on that residual; I will come to this on the next slide.
0:04:41 So our aim is to build up and dynamically evaluate a search tree via the A* search. In each iteration we first choose the best path in the tree and then expand this path. Let's see how this evolves with the example here. We first initialize the search tree to I nodes; here I is set to one, so we have one initial path in the tree. We select the best path in the tree, which is trivially this one single path, and we start the iterations by expanding this path with its best children.
0:05:15 Here the number of extensions per path is restricted to three. We update the residuals and the costs of the paths added to the tree, and then once again select the best path. Now this second path turns out to be the best, so it is expanded once again in the next iteration. Then once again the best path is selected, and so on until convergence. The convergence criterion is that we stop when the best path has length K.
0:05:43 So here, for example, with K equal to four, we stop when this path is selected as the best one, and that point is our solution.
0:05:52 To understand this better, I would like to define three important stages of the algorithm. First, there is the initialization stage: we choose as the initial I nodes the atoms that have the largest inner products with y, so this is just like the first MP step. Then we have the selection of the best path, and here there is the problem of comparing paths with different lengths. And then there is the problem of how we expand the selected path, or, in other words, how we avoid having too many paths in the tree, so that the search stays tractable.
0:06:23 As for the best path selection, we have the so-called auxiliary function of the A* search; you can see this function here. Say I have a path of length four and I would like to compare it to a second path of length two. I have to compensate for the non-existing future nodes in order to have comparable costs, and the A* search introduces this auxiliary function to compensate for this non-existence. So the function should reflect how much the residual would decrease if the shorter path were completed to length four.
0:07:00 To do this, we need to define cost models. It is not easy to find cost models that always hold exactly; we look for approximations that generally hold well.
0:07:12 As for the cost models, I would like to introduce three schemes. The first one is an additive scheme: here I have a path and its corresponding residual, and I assume that the addition of each new node reduces the residual by a constant amount proportional to the l2 norm of the observation. You can also see the general form here, for a path of length l.
0:07:42 As a second example of a scheme, the adaptive one: once again I have the same path, and I assume that the addition of each new node reduces the residual by an amount proportional to the difference of the l2 norms of the residuals of the last two nodes, so here the l2 norm of r1 minus the l2 norm of r2. We can also write this generally, as you see, for any path of length l.
0:08:17 And finally there is another cost model, the multiplicative one. Here we assume that the addition of each new node reduces the residual by a constant ratio alpha, so the cost is alpha to the power K minus l times the l2 norm of the selected path's residual. This can also be generalised as this form here.
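The three auxiliary cost schemes can be written down compactly. Below is my reading of them as code, an illustrative sketch rather than the speaker's formulas verbatim; `res_norms` is the list of residual l2 norms along a path, starting from the norm of y, while `K`, `delta` and `alpha` are the target sparsity and the constants from the slides:

```python
def cost_additive(res_norms, K, delta, y_norm):
    """Assume each unexplored future node shrinks the residual
    by a constant amount proportional to ||y||."""
    l = len(res_norms) - 1          # current path length; res_norms[0] = ||y||
    return res_norms[-1] - (K - l) * delta * y_norm

def cost_adaptive(res_norms, K):
    """Assume future nodes shrink the residual by the same amount
    as the last appended node did."""
    l = len(res_norms) - 1
    last_gain = res_norms[-2] - res_norms[-1]
    return res_norms[-1] - (K - l) * last_gain

def cost_multiplicative(res_norms, K, alpha):
    """Assume each future node shrinks the residual norm by a
    constant ratio alpha (0 < alpha < 1)."""
    l = len(res_norms) - 1
    return alpha ** (K - l) * res_norms[-1]
```

All three make shorter paths comparable to longer ones by crediting them with an assumed future decrease of the residual.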
0:08:39 So after we apply these cost models, we can select the best path, the one that has the minimum cost, and now we face the problem of expanding it. Well, the A* search in its original form expands all the children of the selected path; applying this directly in our tree would result in on the order of N to the power K paths, which would be clearly intractable. So we have to apply some pruning.
0:09:01 The first pruning mechanism is the extensions pruning: we prune the children of the selected path down to the best B, that is, we keep the B children whose atoms have the largest inner products with the residual of the path. The second pruning is the tree size pruning: we limit the number of paths in the tree to P, and when the number of paths exceeds P, we remove the paths that have the worst costs.
0:09:33 And we also have to take care of equivalent paths. As I said, permutations of the nodes of a path may exist in the tree, and any permutation of the nodes of a path is equivalent to it. An example is here: we have path 1 and path 2, with the same nodes at different positions, and as you see the first three nodes of path 1 and path 2 share the same set. By the orthogonal projection property, I know that the residual here, at this node, will be equal to the residual here, at this node. So I am sure that if path 1 and path 2 were selected and expanded, the two would evolve exactly the same, and I can say that path 1 and path 2 are equivalent.
0:10:14 But here the case for path 3 is different: it has node 5 here in the middle, which is not among the first three nodes of path 1. This changes everything, and path 1 and path 3 are not equivalent; as you can see, the residuals here and here would not be the same in general.
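Since the orthogonal projection makes a path's residual depend only on which atoms the path contains, not on their order, the equivalence test reduces to a set comparison of the leading nodes. A small sketch, using a hypothetical helper not taken from the talk:

```python
def paths_equivalent(path_a, path_b):
    """Two paths are interchangeable if one's node set equals the set of
    the other's leading nodes: the shorter path's residual then equals
    the residual at the corresponding depth of the longer one, so both
    would expand identically from there on."""
    short, long = sorted((path_a, path_b), key=len)
    return set(short) == set(long[:len(short)])
```

Detecting such duplicates lets the search drop one of the two paths without losing any candidate solution.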
0:10:30 So we can now illustrate a single iteration of A*OMP with one initial node, four paths allowed in the tree, and the number of extensions restricted to three. We have the initial tree, and the best path is selected as path 4, with the minimal cost. The best extensions turn out to be nodes 2, 8 and 9, that is, the nodes with the largest inner products with the residual of the path.
0:10:56 So we first add node 2 to path 4; this doesn't change the number of paths in the tree, so we are fine. Then we have to add node 8; this creates a new path, so we now have five paths in the tree, and the worst one has to be removed. Last, we have to consider node 9. Here we see that if we added node 9, we would have a new path with a cost higher than all the paths in the tree, so, as we already have all four paths, we ignore this one. That was one single iteration of the algorithm.
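Putting the selection, extension and pruning steps together, the whole search might be condensed as below. This is my illustrative sketch with the multiplicative cost, not the authors' reference implementation; `B` is the number of extensions per path, `P` the maximum number of paths in the tree, and `alpha` the cost constant:

```python
import numpy as np

def astar_omp(A, y, K, B=3, P=4, alpha=0.8):
    """A*OMP sketch: best-first search over OMP-style path extensions."""
    def project(path):
        # orthogonal projection of y onto the atoms of the path
        coef, *_ = np.linalg.lstsq(A[:, path], y, rcond=None)
        return y - A[:, path] @ coef, coef

    def cost(path, res):
        # multiplicative auxiliary cost: assume each future node
        # shrinks the residual norm by the constant ratio alpha
        return alpha ** (K - len(path)) * np.linalg.norm(res)

    first = int(np.argmax(np.abs(A.T @ y)))        # initialization (I = 1)
    tree = [([first], project([first])[0])]

    while True:
        bi = min(range(len(tree)), key=lambda i: cost(*tree[i]))
        path, res = tree[bi]
        if len(path) == K:                         # best path complete: done
            x = np.zeros(A.shape[1])
            x[path] = project(path)[1]
            return x
        tree.pop(bi)
        # extensions pruning: only the B atoms best matching the residual
        order = np.argsort(-np.abs(A.T @ res))
        for j in [int(j) for j in order if int(j) not in path][:B]:
            new_path = path + [j]
            # equivalent-path pruning: skip if the same node set exists
            if any(set(new_path) == set(p) for p, _ in tree):
                continue
            tree.append((new_path, project(new_path)[0]))
        # tree size pruning: keep only the P lowest-cost paths
        tree.sort(key=lambda pr: cost(*pr))
        del tree[P:]
```

With B = P = 1 this reduces to plain OMP; larger B and P trade computation for a wider search, matching the behaviour reported in the experiments.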
0:11:29 Now I would like to demonstrate the performance of the algorithm in a 1D scenario. We have the reconstruction of 1D signals of length 256 from a reduced set of observations, where the sparsity K varies between 10 and 50 and the non-zero coefficients are drawn from the standard normal distribution.
0:11:52 The red curve shows the performance of A*OMP with the multiplicative cost function, and as you can see from the normalized mean squared error, it is lower than that of the other candidates, namely subspace pursuit, basis pursuit and orthogonal matching pursuit. The same is also true for the exact reconstruction rates, so our algorithm also beats the others there.
0:12:17 The second case here is very similar to the first one, except that the non-zero coefficients now stem from the uniform distribution between minus one and one. Here I plot all three cost models for A*OMP, that is, the adaptive, additive and multiplicative ones. We see that in terms of normalized mean squared error they still outperform the others, up until somewhere where the error is already quite high.
0:12:44 As for the exact reconstruction rates, the same also holds, except for the additive one, whose reconstruction rates fall short. So we can conclude that the adaptive and multiplicative strategies are far better for our purposes than the additive one.
0:13:03 So finally, we have tested on some well-known image problems. Sorry. We block-processed the images in eight-by-eight blocks, where each block was preprocessed to be fourteen-sparse, and we took a reduced number of observations from each block. So these are approximately sparse images, and we see that the multiplicative A*OMP here results in better reconstruction PSNRs than all the other algorithms; also, increasing the number of branches per iteration increases the performance further.
0:13:42 Here we can see two images: this is the basis pursuit reconstruction of Lena and this is the A*OMP reconstruction. I hope the differences are visible, for example here in the bottom regions, or in the hair region.
0:13:57 Maybe this will be more visible now: here are the basis pursuit reconstruction, the target image, and the A*OMP reconstruction. Clearly, at the bottom, the A*OMP image is much closer to the target image, and also here, for the hair region, it is much closer to the target.
0:14:16 We can also compare the per-pixel errors of basis pursuit and A*OMP. As you can see, for example at the boundaries of Lena, the errors for basis pursuit are clearly much larger than those for A*OMP.
0:14:35 To conclude, I have presented here a multi-path strategy that combines best-first search and orthogonal matching pursuit. In this algorithm our aim is to build up and evaluate a search tree, iteratively following the paths that minimize the cost function. For this cost function we have introduced three schemes: two dynamic schemes, the so-called multiplicative and adaptive ones, and also an additive scheme, though the multiplicative and adaptive ones performed better in the examples. And we showed that A*OMP provides better reconstruction than OMP, SP and BP in the demonstrated scenarios.
0:15:13 Finally, I would also like to note that code for the algorithm is available at my website, where we also introduce a real-time implementation of it. So, thank you.
0:15:33 (Audience) What is the computation time of the algorithm?
0:15:40 Well, the computation time depends on the number of nodes expanded in the tree, and I should say that it is much higher than for basis pursuit in this case, where you expand that many nodes per iteration. I can say that, with the real-time software on my computer, it takes roughly ten seconds for one vector in, for example, this experiment with K equal to twenty-five, which used five hundred randomly drawn vectors.
0:16:14 In terms of a computational analysis, you have to find a limit on the number of paths in the tree. You can find very loose upper bounds, but these don't really represent the complexity of the algorithm. So we are working on tightening this bound, but I cannot say anything more now.
0:16:36 Any other questions?
0:16:47 (Audience) You can show that the algorithm finds the optimal solution only under some conditions on the future expansion cost, namely if it is a lower bound on the real cost. So how can you guarantee that your cost models, say the adaptive model, really satisfy this lower-bound condition? Do you have a proof for that?
0:17:07 We do not have a proof for that, but, as I said here, it is really hard to find exact models, that is, models for which the assumed decrease in the residual always holds; such a model does not exist. And of course, if we had known one, we would have an exact reconstruction algorithm, which would be hard to prove. What one can probably show is whether these cost models tend to hold in general. I think that can only be done when the search is conducted in a way similar to the A* search, though not exactly in the sense that A* prescribes; we have ongoing work on that.
0:17:48 There is also a trade-off here: if you select alpha very high, the model will hold exactly, but then you have too many expansions to reach the solution. Other than that, I think it is an open problem to find, let's say, a relation between these costs and the exact ones.
0:18:13 (Audience) Just one more comment: there was a lot of work in the coding theory community on list decoding, and this falls into the category of stack and partial-list algorithms. There was a list decoding algorithm that works in a very similar way: partially constructing a list of candidates, pruning the list, and at the end keeping a list of K candidates. In your case, do you keep only one path at the end, or do you keep a list?
0:18:46 Well, during the search we do keep a number of paths, but finally it turns into the one path that you keep.
0:18:51 (Audience) Because if you keep a list throughout your search, and you keep a list at the end as well, then when you have very few samples you can play with that list, using it as side information for the algorithm.
0:19:07 In our case it returns only the one path. Okay, thank you.
0:19:12 Any other questions?
0:19:18 Okay, let's thank the speaker again.
0:19:20 Thank you.