0:00:13 So, good morning everyone. I'm Marc, and I am going to present to you joint work with my co-authors. The title of my talk is "An extension of the ICA model using latent variables".
0:00:36 Here is the outline of my talk. I will first introduce the problem, then present the extension that I propose of the independent component analysis model. I will start with a recall of the classical linear mixture of independent sources, and then present what I consider as dependent sources, and the latent variables, in this context. I will then consider the problem of parameter estimation in incomplete data, based on the so-called iterative conditional estimation method. Please note that there are two acronyms: ICA, which you all know, and ICE, which is iterative conditional estimation. Then I will present the separation method, and I will show you a few simulations, in the case where the latent process is i.i.d. and in the case where it is not. Finally, I will conclude my talk.
0:01:34 You probably all know blind source separation, or independent component analysis, which is well known. The usual assumptions are that the sources should be statistically mutually independent, that they should be non-Gaussian, and that the observations are given by a linear instantaneous mixture of the sources. However, it is known that independence is not always required. For example, you may rely only on decorrelation in the case of second-order methods, if you use non-stationarity or the colour of the sources; and in the case of sources which belong to a finite alphabet, you may also relax the independence assumption. There are also other, very specific cases that have been studied.
0:02:28 The objective of this talk is precisely to relax the independence assumption in blind source separation. For doing this, I propose a new model where separation is possible, and where the dependence is controlled by a latent process.
0:02:46 So let me give you the notations; this is quite classical. I consider that I have capital T samples. The observations are denoted by x(t), and I have capital N observations and capital N sources, given by s(t). A is the unknown mixing matrix, a square, invertible matrix, and at each time instant the observations x(t) are given by the product A times s(t). I am in the blind context, that is, only x, which consists of the samples x(1), ..., x(T), is available. The objective of ICA is to estimate the sources; as you know, this is equivalent to estimating B, which is the inverse of A, or an inverse of A up to a scaling and a permutation.
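As a rough sketch of this notation (the particular mixing matrix and the uniform sources below are arbitrary illustrative choices, not the ones used in the talk):

```python
import numpy as np

# Sketch of the mixing model x(t) = A s(t), with N = 2 sources and T samples.
# A is an arbitrary square invertible matrix chosen only for illustration.
rng = np.random.default_rng(0)
T = 1000
S = rng.uniform(-1, 1, size=(2, T))   # two independent uniform sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])            # unknown mixing matrix (square, invertible)
X = A @ S                             # observations: the only available data

# Blind separation amounts to estimating B = A^{-1} (up to scaling/permutation);
# here we cheat and invert the known A just to illustrate the relation.
B = np.linalg.inv(A)
S_hat = B @ X                          # recovers the sources exactly
```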
0:03:32 Here is a figure that probably most of you have already seen, as a way to introduce independent component analysis. I consider the classical case with two sources, s1 and s2; s1 is on the horizontal axis and s2 on the vertical axis of the left figure, and they are independent and uniformly distributed. When you mix them with the matrix A, which is given here, you obtain the observations on the right; I plot here the samples of x1 and x2 at the different time instants. You can see basically that you can separate the sources, because the slopes of the edges of the parallelogram give you the columns of the mixing matrix A.
0:04:26 Okay, this is well known, and now comes the central idea of my talk. What I am saying is that, if ICA succeeds in separating sources which have this distribution, I should be able to do something and to perform separation if the distribution looks something like that. So the idea is to consider sources which have a distribution that is partly independent and partly dependent. In order to find the mixing matrix, the idea is to say: if I am able to erase the points in green, that is, to find the samples which are dependent in a certain sense, then I should be able to use any classical ICA algorithm in order to perform blind source separation.
0:05:25 Okay, so the question is: how can I model such a type of distribution, such a type of sources? If you look at it carefully, you see that actually you have a mixture of two probability densities: with probability p, between zero and one, the sources are independent, and with probability 1 - p the sources are dependent. One possible way, which is classical, to model this is to introduce a latent, hidden process r(t), and to say: if r(t) is equal to zero, then the sources are independent; if r(t) equals one, then the sources are dependent.
0:06:10 Here I am writing the same idea in more mathematical terms. I consider that I have a hidden process r(t) such that, conditionally on r, the components of s, s1, ..., sN, are either independent or not; this is given by the joint probability distribution which is written on the screen. r(t) takes values either zero or one. If r(t) is equal to zero, then the usual assumptions made in ICA apply, that is, the components are independent and non-Gaussian; and if r(t) equals one, then you have the dependence, which you can choose as you want, and which I will specify later.
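A minimal sampler for this latent-variable model might look as follows; the particular dependent law chosen here (copying s1 into s2 when r(t) = 1) is purely an illustrative assumption, since the talk leaves the dependent distribution free:

```python
import numpy as np

# Hypothetical sampler for the latent-variable source model:
#   r(t) = 0 with probability p  -> independent sources at instant t,
#   r(t) = 1 with probability 1-p -> dependent sources at instant t.
# The dependent law (s2 = s1) is an arbitrary illustration.
rng = np.random.default_rng(1)
T, p = 5000, 0.7
r = (rng.uniform(size=T) >= p).astype(int)   # hidden process, 0 = independent
S = rng.uniform(-1, 1, size=(2, T))          # independent draws everywhere...
S[1, r == 1] = S[0, r == 1]                  # ...overwritten by a dependent law where r = 1
```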
0:06:56 Okay, that is the model I consider. Now I am moving to the problem of parameter estimation. First, let me say that I did not say anything yet about r; you have several possibilities. One possibility is to model r as an i.i.d. Bernoulli process, which is equal to zero with probability p, or equal to one with probability 1 - p, i.i.d. over time. You can also consider that r is given by a Markov chain; this works as well.
0:07:28 Anyway, the objective is to estimate a set of parameters which consists of the inverse of the mixing matrix and the parameters of the law of r. As I told you before, if I am able to erase the green points, that is, if r were known, then I can perform ICA, which is quite easy, and I can estimate the parameters of r just by counting: the number of times r is equal to zero gives me an estimate of p. I can identify the time instants where dependence of the sources holds, consider only the restricted sub-sample of the observation samples where independence holds, and perform my favourite ICA algorithm on it.
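This complete-data estimator could be sketched as follows, where `ica` stands for any classical algorithm (JADE is mentioned later in the talk) and is deliberately left abstract:

```python
import numpy as np

# Complete-data estimation: if the hidden labels r were observed, p would be
# estimated by counting, and ICA would be run on the samples where r == 0 only.
# `ica` is a placeholder for any classical ICA algorithm returning a separating
# matrix; it is not specified by the model.
def complete_data_estimate(X, r, ica):
    p_hat = np.mean(r == 0)       # fraction of time instants with independence
    B_hat = ica(X[:, r == 0])     # favourite ICA on the restricted sub-sample
    return B_hat, p_hat
```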
0:08:11 In other words, I have a complete-data estimator: if r and x are both available, I have an estimator of the quantities I am interested in. Unfortunately, or rather fortunately, because otherwise I could not give this talk, we are in an incomplete-data context, that is, only x is available, and not r and s. At this point, please note that I mix two different ideas: the idea of incomplete data refers to the fact that r is not known, while the fact that s is unknown is what I mean when I say that I am in a blind context.
0:08:58 Okay, so here is the second important idea of my talk. For incomplete-data estimation, there is a quite efficient algorithm to perform estimation. This algorithm is called ICE, iterative conditional estimation, and it consists in the following: you first initialize the parameter value; then you compute the conditional expectation of the complete-data estimator given x, and you iterate this step until you obtain convergence. It may happen that some components of this conditional expectation are not computable. If you cannot compute the posterior expectation, then you have another possibility: you can draw random hidden labels r according to the law of r given x, and then replace the expectation by an empirical mean, that is, a sum divided by K, where K is the number of realizations you draw.
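The stochastic variant of an ICE iteration, where sampled labels replace a non-computable expectation, might be sketched generically as follows; `draw_r` and `estimator` are hypothetical placeholders for the model-specific pieces, not functions defined in the talk:

```python
import numpy as np

# One stochastic ICE iteration: when E[estimator | x; theta] is not computable
# in closed form, draw K label sequences from p(r | x; theta) and average the
# resulting complete-data estimates.
def ice_step(X, theta, draw_r, estimator, K=10, rng=None):
    rng = rng or np.random.default_rng()
    estimates = [estimator(X, draw_r(X, theta, rng)) for _ in range(K)]
    return np.mean(estimates, axis=0)   # empirical mean replaces the expectation
```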
0:10:05 If you have a look at ICE, it has quite weak requirements. You need a complete-data estimator; here, just an ICA algorithm will do the job. And you need to be able to calculate the posterior expectation or, which is much weaker, to draw r according to the law of r given x. In the model I propose, the problem is thus to calculate the probability of r given x. Well, if r is i.i.d., it is just given by the Bayes rule, which is written here. If r is a Markov chain, you can also do it; you just need to resort to a well-known forward-backward algorithm, which allows you to compute the law of r given x.
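In the i.i.d. case, the Bayes rule gives the posterior label probabilities directly. Here is a sketch, using an illustrative pair of densities, f0 (independent Laplace components) and f1 (second component tightly coupled to the first); neither is claimed to match the talk's exact choices:

```python
import numpy as np

# Bayes rule for the posterior of the label in the i.i.d. case:
#   P(r(t)=0 | s(t)) = p f0(s(t)) / (p f0(s(t)) + (1-p) f1(s(t))),
# where f0 / f1 are the source densities under independence / dependence.
def posterior_r0(S, p, f0, f1):
    num = p * f0(S)
    return num / (num + (1.0 - p) * f1(S))

# Illustrative densities (assumptions, not the paper's):
f0 = lambda S: np.prod(0.5 * np.exp(-np.abs(S)), axis=0)   # independent Laplace
f1 = lambda S: (0.5 * np.exp(-np.abs(S[0]))                # s2 concentrated
                * (0.5 / 0.1) * np.exp(-np.abs(S[1] - S[0]) / 0.1))  # around s1
```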
0:10:54 Okay, so let me point out a problem that you may encounter here; it concerns the distribution of the sources. You all know that ICA has some ambiguities. So when you perform ICA at a given ICE iteration, you do not know which permutation you obtain, and hence you do not know whether the inverse matrix you found corresponds to the left or to the right distribution. To avoid this problem, we decided to consider, for the ICE algorithm, that the distribution of the sources looks like this: it is invariant under permutation, and it avoids any ambiguity. Of course, this is not the true distribution; the true distribution is the one at the top. But for calculating the law of r given x, we used this assumed distribution instead.
0:11:59 So, to say the same in more mathematical terms: the joint density of r and s is given by the top equation; the distribution used to simulate the data is written in the middle column; and the assumed distribution for ICE is written on the right. You see that N denotes the normal law and L denotes a double exponential (Laplace) law, and that we symmetrized it in the case of the assumed distribution.
0:12:38 Okay, so now comes my combined ICA-ICE estimation algorithm. You first initialize the parameters B0 and p0, and then you iterate the following steps. You first calculate the posterior law of r given x; having this law, you can draw the hidden labels r according to the posterior law of r given x under the current value of the parameters. Then you select the time instants where r is equal to zero, and you perform an ICA algorithm considering only those samples where r is equal to zero, ignoring the other part. You also update the parameter p, which is the probability that the sources are independent at a given time instant. In the case of a Markov chain, it is almost the same thing; you just have a more complicated step in order to update the parameters of r, but it is basically the same. And you can use any favourite ICA algorithm; in our simulations we used JADE, but other algorithms would work as well.
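The loop above might be sketched as follows for i.i.d. labels; `ica`, `f0`, and `f1` are placeholders (any classical ICA algorithm and assumed source densities), and this is only an outline of the method under those assumptions, not the authors' implementation:

```python
import numpy as np

# Combined ICA-ICE loop, i.i.d. labels.  At each iteration: compute the
# posterior of r, draw labels, run ICA on the instants labelled "independent",
# and update p by counting.  `ica(X_sub)` must return a separating matrix.
def ica_ice(X, ica, f0, f1, n_iter=20, p0=0.5, rng=None):
    rng = rng or np.random.default_rng()
    B, p = np.eye(X.shape[0]), p0                  # initialise parameters B0, p0
    for _ in range(n_iter):
        S = B @ X                                  # current source estimates
        num = p * f0(S)
        post0 = num / (num + (1 - p) * f1(S))      # P(r(t)=0 | x; current params)
        r = (rng.uniform(size=X.shape[1]) >= post0).astype(int)  # draw labels
        B = ica(X[:, r == 0])                      # ICA on the independent instants
        p = np.mean(r == 0)                        # update p by counting
    return B, p
```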
0:13:58 So here are a few simulations. You have here the average mean square error on the sources, depending on p, the probability that the sources are independent; p = 1 is the case where the sources are effectively independent, so that the assumptions of ICA hold. In green, you have the case of complete data, that is, r is known: I perform ICA on the restricted set consisting only of the samples where independence holds, so it is a lower bound on the MSE that you can attain. In blue, you have the case where you just ignore the dependence and act as if the sources were independent, although they are not, and simply apply JADE. You can see that when p decreases, that is, when the sources become more and more dependent, JADE of course fails to separate the sources. And in red, you have the results given by my algorithm, which performs incomplete-data estimation; you can see that we obtain quite interesting performance in the case where p is greater than, say, 0.4 or 0.5, depending on what you expect. Here you have basically the same results; the only thing to note is that the results remain true whatever the sample size may be.
0:15:33 Finally, let me point out that you can use the same approach in the case where r is a Markov chain. In this case, you can see that if you take into account the fact that r is a Markov process, you are able to improve the performance. Here we generated the latent process according to a Markov chain, and we separated the sources either ignoring the Markov property or using it; we obtained better performance when using the Markov property.
0:16:08 So, as a conclusion, I proposed a new model which consists of a linear mixture of sources, where the sources are dependent and there exists a latent, or hidden, process which controls the dependence of the sources. The separation method that I proposed relies on two different methods: first, ICA, which is applied to the independent part of the sources' distribution; and ICE, which estimates the parameters of the model in an incomplete-data context. With that, I thank you for your attention.
0:17:05 Question: Do you have a use case for this model?
Answer: That is a very good question; I have no application at this time, but I am interested if anyone can point me to an interesting one.
0:17:16 Question: If I understand well, when you estimate the mixture, you only use the independent instants?
Answer: Yes, I find the time instants where independence holds, and I ignore the instants, the samples, where independence does not hold.
0:17:45 Question: But you still mix the others; I mean, when you do not have independence, is it still possible to estimate the mixture?
0:17:59 Answer: I am sorry, I do not understand your question.
Question: Okay, you have two underlying processes?
Answer: Yes.
Question: And I think in one of them you have independence of the sources, and in the other you do not, but you still have the mixture in both cases?
Answer: I did not understand; I have just one vector process s(t). At the different time instants, the samples are either independent or not, and I mix them with the same matrix.
Question: Really, even in the case where they are dependent?
Answer: Yes, I mix them: I have s1(t) and s2(t), and I mix them whether they are dependent or not.
0:18:39 Question: Okay, then I was wondering whether it could be possible to do something more in the case where they are dependent, because you also have the mixture in that case; maybe this information could be exploited.
Answer: Yes, of course, if you know more. I agree: if you know more about the distribution, then you can probably perform better; here I just ignore a part of the information. Yes, that is right.