So, good morning everyone. I am going to present work done jointly with my co-authors, and the title of my talk is "An extension of the ICA model using latent variables". Here is the outline of my talk: I will first introduce the problem, then I will present the extension we propose of the independent component analysis model. I will start with a recall of the classical model, a linear mixture of independent sources, and then I will present what I consider as dependent sources and latent variables in this context. I will then consider the problem of parameter estimation in this model, which will be based on the so-called iterative conditional estimation method; please note that there are two acronyms here, ICA, which you all know, and ICE, which stands for iterative conditional estimation. Then I will present the separation method, and I will show you a few simulations, in the case where the latent process is i.i.d. and in the case where it is not, and finally I will conclude my talk.

You probably all know blind source separation, or independent component analysis. The usual assumptions are that the sources should be statistically mutually independent, they should be non-Gaussian, and the observations are given by a linear instantaneous mixture of the sources. However, it is known that independence is not always required: for example, you may rely only on decorrelation in the case of second-order methods, in the case where you use non-stationarity or the colour of the sources, or in the case of sources which belong to a finite alphabet. One may also relax the independence assumption in other very specific cases that have been studied. The objective of this talk is precisely to relax the independence assumption in blind source separation, and for doing this I propose a new model where separation is possible and where the dependence is controlled by a latent process.

So let me give you the notation. I consider that I have T samples; the observations are denoted by x(t), and there are N observations and N sources, denoted s(t). A is the unknown mixing matrix, a square, invertible matrix, and at each time instant the observations x(t) are given by the product A times s(t). In the blind context, only X, which consists of the samples x(1), ..., x(T), is available, and the objective of ICA is to estimate the sources; equivalently, to estimate B, which is the inverse of A, or an inverse of A up to permutation and scaling.

Here is a figure that probably most of you have already seen as a way to introduce independent component analysis. I consider the classical case with two sources, s1 and s2: s1 is on the horizontal axis and s2 on the vertical axis; this is the left figure, and the sources are independent and uniformly distributed. When you mix them with the matrix A, which is given here, you obtain the observations on the right: I plot the different samples of x1 and x2 at the different time instants, and you see that basically you can separate the sources, because the slopes of the edges of the parallelogram give you the columns of the mixing matrix A. This is well known.
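As a quick illustration of this classical setting, here is a minimal sketch in Python (assuming NumPy) of the linear instantaneous mixture x(t) = A s(t) with two independent, uniformly distributed sources; the matrix A and all numerical values are arbitrary examples, not the ones shown on the slide.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 2000                                   # number of time samples
S = rng.uniform(-1.0, 1.0, size=(2, T))    # two independent, uniformly distributed sources s(t)

A = np.array([[1.0, 0.6],                  # example of a square, invertible mixing matrix
              [0.4, 1.0]])                 # (arbitrary values, not those of the slide)

X = A @ S                                  # linear instantaneous mixture: x(t) = A s(t)

# In the blind context only X is observed; the goal of ICA is to estimate
# B close to the inverse of A, up to permutation and scaling of its rows.
```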
Now comes the central idea of my talk. What I am saying is that if ICA succeeds in separating sources which have this distribution, I should be able to do something and to perform separation if the distribution looks something like that. So the idea is to consider sources which have a distribution which is partly independent and partly dependent, and in order to find an algorithm, the idea is to say: if I am able to erase the points in green, that is, to find the points which are dependent in some sense, then I should be able to use any classical ICA algorithm in order to perform blind source separation.

So the question is: how can I model such a type of distribution, such a type of sources? If you look at it carefully, you see that actually you have a mixture of two probability densities: with a probability p, between zero and one, the sources are independent, and with probability 1 - p the sources are dependent. One possible and classical way to model this is to introduce a latent, hidden process r(t) and to say: if r(t) is equal to zero, then the sources are independent, and if r(t) equals one, then the sources are dependent.

Here I am writing the same idea in more mathematical terms. I consider that I have a hidden process r(t) such that, conditionally on r, the components s1, ..., sN of s are independent or not; this is given by the joint probability distribution which is written on the screen, and r(t) takes the values zero or one. If r(t) is equal to zero, then the usual assumptions made in ICA apply, that is, the components are independent and non-Gaussian; if r(t) equals one, then you have a dependence which you can choose as you want, and which I will specify later.

OK, that is the model, and now I am moving to the problem of parameter estimation. First, let me say that I did not say anything about r; there are several possibilities to model it. One possibility is to model r as an i.i.d. process, equal to zero with probability p and equal to one with probability 1 - p: this is the i.i.d. case. You can also consider that r is given by a Markov chain; the approach works anyway. The objective is to estimate a set of parameters which consists of the inverse of the mixing matrix and the parameters of r. As I told you before, if I am able to erase the green points, that is, if r is known, then you see that I can perform ICA, which is quite easy, and I can estimate the parameters of r just by counting: the number of times r is equal to zero gives me an estimate of p. I can select the time instants where independence of the sources holds, consider only this restricted sub-sample of the observation samples, and perform my favourite ICA algorithm. In other words, I have a complete data estimator: if r and X are both available, I have an estimator of the parameters I am interested in. Unfortunately, or fortunately, because otherwise I could not give this talk, we are in an incomplete data context: only X is available, and neither r nor s. At this point, please note that I use two different ideas: the idea of incomplete data refers to the fact that r is unknown, whereas the fact that s is unknown is what I mean by saying that I am in a blind context.

OK, so here is the second important idea of my talk.
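To make the latent-variable model concrete, here is a minimal sketch of one way to simulate it: r(t) is an i.i.d. process with P(r(t) = 0) = p, and the components are independent when r(t) = 0. The particular dependent regime used when r(t) = 1 (the second component almost equal to the first) is only an illustrative choice of mine, since the model leaves that joint law free.

```python
import numpy as np

rng = np.random.default_rng(1)

T, p = 2000, 0.7                             # p = P(r(t) = 0): probability that the sources are independent

# latent i.i.d. process: r(t) = 0 -> independent sample, r(t) = 1 -> dependent sample
r = (rng.uniform(size=T) >= p).astype(int)

S = rng.uniform(-1.0, 1.0, size=(2, T))      # start from independent components
dep = (r == 1)
# illustrative dependent regime: s2(t) nearly equal to s1(t) when r(t) = 1;
# the model allows any joint law here, this is only one possible choice
S[1, dep] = S[0, dep] + 0.05 * rng.normal(size=dep.sum())

A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                   # same kind of mixing matrix as before
X = A @ S                                    # only X is observed; r and S stay hidden
```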
For incomplete data estimation, there exists a quite efficient algorithm called ICE, iterative conditional estimation, and it consists in the following. You first initialize the parameter value; then you compute the expectation of the complete data estimator conditionally on X, and you iterate this step until you obtain convergence. If some of the components of this conditional expectation are not computable, you have another possibility: instead of computing the posterior expectation, you can draw random labels r according to the law of r given X, and then replace the expectation by an empirical average, a sum divided by K, where K is the number of realizations you draw.

So if you have a look at ICE, it has quite weak requirements: you need a complete data estimator, and here an ICA algorithm will do the job, and you need to be able to compute the posterior expectation or, which is much weaker, to draw r according to the law of r given X. Here, in the model I propose, the problem is to compute the probability of r given X. If r is i.i.d., it is simply given by the Bayes rule, which is written here; if r is a Markov chain, you can also do it, you just need to resort to the well-known forward-backward algorithm, which allows you to compute the probability of r given X.

OK, so let me point out a problem that you may encounter. Here is the distribution of the sources; however, as you all know, ICA has some ambiguities. So when you perform ICA at a given ICE iteration, you do not know which permutation you obtain, and you do not know whether the inverse matrix you found corresponds to the left or to the right distribution. To avoid this problem, we decided to consider, for the ICE algorithm, that the distribution of the sources looks like this: it is invariant by permutation, and it avoids any problem. Of course this is not the true distribution, the true distribution is at the top, but for computing the law of r given X we considered this assumed distribution.

Just to put it in more mathematical terms, the joint density of r and s is given by the top equation, the simulated data follows the distribution written in the middle column, and the assumed distribution for ICE is written on the right. You see that we have a normal law and a double exponential law, and we symmetrized it in the case of the assumed distribution.

OK, so now comes my combined ICA-ICE estimation algorithm. You first initialize the parameters B0 and p0, and then you iterate the following steps. You first compute the posterior law of r given X; having this, you can draw labels r according to the posterior law of r given X with the current value of the parameters. Then you select the time instants where r is equal to zero and you perform an ICA algorithm considering only these samples where r is equal to zero, simply ignoring the dependent part. You also update the parameter p, which is the probability that the sources are independent. In the case of a Markov chain it is almost the same thing, you just have a slightly more complicated step in order to update the parameters of r, but it is basically the same. And you can use any favourite ICA algorithm: in our simulations we used JADE, but other algorithms would work as well.
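Here is a rough sketch of the combined ICA-ICE iteration as I understand it from the description above. The assumed densities (independent Laplace components when r(t) = 0, an equi-correlated Gaussian when r(t) = 1), the use of scikit-learn's FastICA in place of JADE, and the fixed number of iterations are all stand-in assumptions for illustration, not the exact choices of the talk.

```python
import numpy as np
from sklearn.decomposition import FastICA    # stand-in for JADE, which was used in the talk

rng = np.random.default_rng(2)

def loglik_indep(Y):
    # assumed density when r(t) = 0: independent Laplace (double exponential) components
    return np.sum(-np.abs(Y) - np.log(2.0), axis=0)

def loglik_dep(Y, rho=0.9):
    # assumed density when r(t) = 1: zero-mean Gaussian with correlation rho between components
    N = Y.shape[0]
    C = (1.0 - rho) * np.eye(N) + rho * np.ones((N, N))
    Ci, det = np.linalg.inv(C), np.linalg.det(C)
    quad = np.einsum('it,ij,jt->t', Y, Ci, Y)
    return -0.5 * quad - 0.5 * (N * np.log(2.0 * np.pi) + np.log(det))

def ica_ice(X, n_iter=20):
    """Stochastic ICA-ICE iteration: draw labels r given X, run ICA on the r = 0 samples."""
    N, T = X.shape
    B = np.eye(N)                            # initial unmixing matrix B_0
    p = 0.5                                  # initial P(r(t) = 0)
    for _ in range(n_iter):
        Y = B @ X                            # current source estimates
        # posterior P(r(t) = 1 | x(t)) by the Bayes rule, under the assumed densities
        l0, l1 = loglik_indep(Y), loglik_dep(Y)
        post1 = (1.0 - p) * np.exp(l1) / (p * np.exp(l0) + (1.0 - p) * np.exp(l1))
        # draw labels according to the posterior law of r given X
        r = (rng.uniform(size=T) < post1).astype(int)
        keep = (r == 0)
        if keep.sum() > 10 * N:              # complete-data step: ICA on the "independent" samples only
            ica = FastICA(n_components=N, random_state=0)
            ica.fit(X[:, keep].T)
            B = ica.components_              # updated estimate of the unmixing matrix
        p = keep.mean()                      # updated probability that the sources are independent
    return B, p
```

On data generated as in the previous sketch, one would call `B_hat, p_hat = ica_ice(X)`; the returned matrix estimates the unmixing matrix up to the usual permutation and scaling ambiguities.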
So here are a few simulations. You have here the average mean square error on the sources as a function of p, the probability that the sources are independent; p equal to one is the case where the sources are effectively independent and the assumptions of ICA hold. In green, we have the complete data case, where r is known: I perform ICA on the restricted set consisting only of the samples where independence holds, so it is a lower bound on the MSE that you can obtain. You also have the case where we just ignore the dependence, act as if the sources were independent although they are not, and simply apply JADE; you can see that when p decreases, that is, when the sources are no longer independent, JADE of course fails to separate the sources. And in red you have the results given by my algorithm, that is, the incomplete data estimation, and you can see that we obtain quite interesting performance in the case where p is greater than, say, 0.4 or 0.5, depending on what you expect. Here you have basically the same results; the only thing to notice is that they remain true whatever the sample size may be. Finally, let me add that you can use the same idea in the case where r is a Markov chain, and in this case you can see that if you take into account the fact that r is a Markov chain, you are able to improve the performance, which you see here: we generated the latent process according to a Markov chain and separated the sources either ignoring the Markov property or using it, and we obtain better performance when using the Markov property.
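The talk measures the average MSE on the recovered sources; as a simple stand-in for reproducing such a sweep over p or over the sample size, one could use an Amari-style separation index, which is zero exactly when the global matrix B_hat A is a scaled permutation. This is not the metric of the talk, just a convenient proxy.

```python
import numpy as np

def separation_index(B_hat, A):
    """Amari-style index of the global matrix P = B_hat @ A.

    It is zero exactly when P is a scaled permutation, i.e. when the sources
    are recovered up to the usual ICA ambiguities; the talk reports the
    average MSE on the sources instead, so this is only a proxy.
    """
    P = np.abs(B_hat @ A)
    rows = (P / P.max(axis=1, keepdims=True)).sum(axis=1) - 1.0
    cols = (P / P.max(axis=0, keepdims=True)).sum(axis=0) - 1.0
    m = P.shape[0]
    return (rows.sum() + cols.sum()) / (2.0 * m * (m - 1.0))
```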
As a conclusion, we proposed a new model which consists of a linear mixture of sources, where the sources are dependent and there exists a latent, or hidden, process which controls the dependence of the sources. The separation method relies on two different methods: first ICA, which is applied to the independent part of the sources' distribution, and ICE, which estimates the parameters in an incomplete data context. I thank you for your attention.

[Audience] (question about applications)
[Speaker] Very good question. I have no application at this time, but I am interested if anyone has one in mind.
[Audience] If I understand well, when you estimate the mixture, you only use the independent samples?
[Speaker] Yes, I find the time instants where independence holds, and I ignore the samples where independence does not hold.
[Audience] But you still mix the others with the same matrix?
[Speaker] I am not sure I understand your question.
[Audience] You have two underlying regimes: in one you have independence of the sources and in the other you do not, but you still have the mixture in both cases?
[Speaker] I have just one vector process. At each time instant t, the components are either independent or not, and I mix them with the same matrix in both cases: I have s1(t) and s2(t), and I mix them whether they are dependent or not.
[Audience] OK. Then wouldn't it be possible to do something also for the case where they are dependent? You have the mixture in that case as well, so maybe this information could be exploited.
[Speaker] Yes, of course, I agree: if you know more about the distribution, then you can probably perform better; here I just ignore a part of the information, yes.