0:00:14 Hi. Today I'm presenting a study on self-disclosure in conversational dialogue systems.
0:00:31 I haven't quite recovered from a cold, so if you can't understand something I say, I'm happy to repeat it.
0:00:40 This study was done as part of the CMU entry to the Amazon Alexa Prize in 2017.
0:00:50 Let's begin with why we are interested in self-disclosure.
0:00:54 Human conversation serves many functions. There are propositional functions, which convey the informational content of the conversation; interactional functions, which manage the conversation itself, such as turn-taking; and interpersonal functions, which build the social relationship between the participants.
0:01:26 Self-disclosure is one of the key social strategies employed in conversation: it builds intimacy between the participants and keeps the conversation engaging.
0:01:38 There are many definitions of what counts as self-disclosure. A widely used one, from 1973, defines it as the verbal communication of personal information, that is, statements of facts, opinions, or experiences about oneself, by one person to another.
0:01:57 Self-disclosure is a very interesting phenomenon that has been studied extensively by the psychology community, in particular because of its tendency to induce reciprocity in dyadic interaction.
0:02:19 That is the well-established phenomenon whereby, when one participant in a conversation self-discloses, the other participant is more likely to self-disclose in turn.
0:02:29 There are many explanations of why this happens, and the exact cause is not settled. One hypothesis is based on social exchange: the party receiving a self-disclosure feels obligated to also self-disclose. Another is that it is a conversational norm: if a person does not disclose in return, they feel uncomfortable. Yet another hypothesis is about social attraction: when someone discloses to us, we trust and like them more, and so we disclose to them in return.
0:03:12 Whatever the exact cause, the reciprocity of self-disclosure is pretty well established: it has been reproduced in many studies and shown to be a very strong effect.
0:03:25 Subsequent studies have shown other aspects of self-disclosure. Self-disclosure reciprocity characterizes initial social interactions, because people self-disclose to try to form relationships. Interestingly, disclosures of higher intimacy have actually been found to be better at eliciting self-disclosure. Also, there is no linear relationship between disclosure and liking: it is not the case that a higher amount of self-disclosure makes someone like you more.
0:03:58 So all of this has been studied for conversations between humans, but we are really interested in whether the same self-disclosure effects appear in human-machine interaction.
0:04:12 If they do, that would have implications for systems that aim to elicit information from the user, to make interactions more pleasant, or to improve task completion.
0:04:28 But there is a key difference: machines don't have selves or feelings to disclose in the first place, so any machine self-disclosure risks coming across as dishonest.
0:04:41 Our starting point is the finding that humans do in fact respond socially to computers. So our research question is whether self-disclosure reciprocity, well established in human-human conversation, also holds in human-machine interaction.
0:05:00 Okay, so now let's talk about the context in which we conducted this study.
0:05:10 Everyone who works in dialogue knows how difficult it is to gather data. But in 2017, Amazon ran the Alexa Prize challenge, in which a number of university student teams had their chatbots hosted on Amazon devices.
0:05:30 This was great for us, because we actually got users in the real world, instead of running an expensive lab study.
0:05:43 We were one of sixteen dialogue systems hosted on the Alexa devices. Any Alexa user in the United States could say the command "let's chat" and would be connected to one of the systems at random; users did not know which dialogue system they were talking to.
0:06:03 This setting is interesting because users were free to end the conversation at any time, and the interactions happened in an open domain with no pre-specified tasks or goals.
0:06:16 So the only reason for a user to continue the conversation was their own entertainment.
0:06:21 At the end of a conversation, the user was asked to rate the interaction on a scale of one to five, based on how much they would like to interact with the social bot again. The rating was optional: 319 out of the roughly fifteen hundred users decided to rate.
0:06:44 Now let me describe the dialogue agent that we entered in the Alexa Prize.
0:06:51 The agent was based on a finite-state machine architecture. This means that every state of the finite-state machine was associated with the response we wanted the dialogue system to give, and the transitions between states were conditioned on the sentiment of the user's utterance.
0:07:08 Just as an example of how this might look: the dialogue agent greets the user with "How is your day going?" If the user says something positive, the bot responds with something like "That's great to hear!" and nothing special happens beyond that acknowledgement. But if the user says something that sounds negative, it tries to give a sympathetic response and asks them what's wrong.
0:07:28 In this way we essentially fixed the order of topics in the conversation.
0:07:41 So the agent would first greet the user, then acknowledge their positive or negative response, then try to talk about media: we asked if they were interested in one of the latest TV shows; if they said they were not interested, we asked them about movies; and if they said they were not interested in movies, we would offer a game.
0:08:01 If, as in this example, they chose a movie, we would spend some time chatting about that movie with them.
0:08:09 Then eventually all users were invited to play a game, choosing from several word games, and they could also choose to exit at any point.
0:08:22 Up to this point the flow was deterministic and the agent was taking the initiative, but from the games stage onward the conversational initiative shifted to the user.
0:08:34 They could talk to the dialogue agent about anything, and we tried to retrieve responses from other sources on the web.
0:08:41 So that is the base dialogue agent, but the two specific chatbots that we used in this study were slightly different. We randomly assigned each user who interacted with the system to one of two chatbots.
0:08:56 One of them is a very highly self-disclosing chatbot from the beginning of the conversation.
0:09:05 For instance, when the machine asks "How's it going?" and the user says "Not great, I'm bored," this chatbot responds with a story about itself that is directly related to the situation: that it has been feeling bored too and has been catching up with its friends, together with an expression of how that made it feel.
0:09:25 Of course the machine, I mean the dialogue agent, does not really have friends, so this is fiction; these stories were scripted ahead of time by a group of humans.
0:09:39 The second chatbot did not tell stories about itself: it simply acknowledges what the user said, with something like "That's great," and transitions to the next question.
0:09:52 So this was the setting under which we ran our experiment.
0:10:00 We were now interested in identifying where users self-disclosed in our dialogue data, so we started by defining what we consider to be self-disclosure.
0:10:07 In the context of conversations with a dialogue agent, we said that self-disclosure has to be intentional, and it has to be informative: it should give personal information beyond what the dialogue agent asked for. In particular, this excludes bare answers to the system's questions.
0:10:33 For example, if the system says "What have you been up to? Anything special?" and the user says "Nothing much, just chilling with my friends today," that counts as a disclosure.
0:10:44 But if the system asks "What is your favorite movie?" and the user names a movie, we do not count that as self-disclosure, because it just answers a direct question and does not give any extra information.
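The two criteria can be written down as a tiny predicate. The function name and the judgments in the comments are illustrative, mirroring the hypothetical examples above, not an artifact from the study.

```python
def is_self_disclosure(intentional: bool, informative: bool) -> bool:
    """An utterance counts as user self-disclosure only if personal
    information is given intentionally AND it goes beyond the bare
    minimum that the agent's question required."""
    return intentional and informative

# "Nothing much, just chilling with my friends today" after
# "Anything special?" -> a volunteered personal detail, counts.
chilling = is_self_disclosure(intentional=True, informative=True)

# Naming a movie after "What is your favorite movie?" -> a bare
# answer to a direct question, does not count.
movie_answer = is_self_disclosure(intentional=True, informative=False)
```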
0:11:00 For this paper, the 319 rated conversations were labeled for user self-disclosure, and we managed to get substantial inter-annotator agreement.
0:11:16 But we actually had a much larger corpus of conversations, and it was not possible for human annotators to label every user utterance for self-disclosure.
0:11:26 So what we did was build an SVM classifier, trained on the labeled conversations, to label the entire corpus for all occurrences of user self-disclosure.
0:11:40 The classifier had an accuracy of 91.7% and an F1 score of 67%, so it was reasonably reliable for our purposes.
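A minimal sketch of this label-bootstrapping step, assuming scikit-learn with TF-IDF word features; the toy utterances and the feature choice are assumptions for illustration, not the study's actual corpus or feature set.

```python
# Train on the hand-labelled utterances, then apply the model to the
# rest of the corpus, in the spirit described in the talk.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

labelled = [
    ("i have been feeling really lonely lately", 1),  # self-disclosure
    ("my day was rough because i failed my exam", 1),
    ("what movies do you know about", 0),             # no disclosure
    ("tell me a joke please", 0),
]
texts, labels = zip(*labelled)

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)

# Apply to the (here, tiny) unlabelled remainder of the corpus.
corpus = ["i failed my driving test and feel lonely", "tell me another joke"]
predictions = clf.predict(corpus)
```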
0:11:53 Okay. So at this point we knew which user utterances were instances of self-disclosure, because we had the classifier label the whole corpus, and we knew when the machine self-disclosed, because we designed the system. That allowed us to study the effects of self-disclosure.
0:12:12 The first effect we want to look at is reciprocity.
0:12:17 What we did was look at user self-disclosures and check whether the turn before each self-disclosing user utterance contained a machine self-disclosure. We found that users were significantly more likely to self-disclose following a machine self-disclosure than when the machine had not self-disclosed.
0:12:39 This held not only for turns immediately following the first instance of machine self-disclosure: users were much more likely to self-disclose even later in the conversation after the machine self-disclosed at its beginning.
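A comparison of rates like the one above is typically done with a two-proportion test. Here is a stdlib-only sketch; the counts are made up for illustration and are not the study's numbers.

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for H0: both groups self-disclose at the same rate."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Made-up counts: user turns containing a self-disclosure, split by
# whether the preceding machine turn self-disclosed.
z = two_proportion_z(success_a=120, n_a=400,   # after machine disclosure
                     success_b=60,  n_b=400)   # after no machine disclosure
significant = abs(z) > 1.96                    # ~5% two-sided threshold
```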
0:12:59 The next question we studied was whether conversations with initial user self-disclosure are actually longer, and the answer was yes: they were significantly longer.
0:13:14 We also wanted to know: if users do not self-disclose initially, do they disclose later in the conversation? We found that if someone does not self-disclose initially, they are much less likely to self-disclose later in the conversation.
0:13:30 Another question we studied was whether users who choose not to self-disclose initially are simply not enjoying the interaction with the machine; this is based on the idea that such users might just not be engaged.
0:13:46 The way we tested this was with a reference point: the word game. We checked whether users who self-disclosed initially were more likely to play the word game, and, if they did play it, for how long they played.
0:14:03 And yes: users who chose not to self-disclose initially were much less likely to play the word game, and if they did play it, they played for a much shorter time than users who chose to self-disclose.
0:14:20 Next, we tested the effect of self-disclosure on likability in human-machine interaction.
0:14:27 Since the Alexa Prize provided us with ratings of how well users liked the interaction with the system, given by the users at the end of the conversation, we used the rating as a proxy for liking. We asked whether conversations in which users self-disclosed a lot were also the ones rated higher.
0:14:44 And the answer is: we don't know.
0:14:48 We could not find any correlation between the user ratings and the amount of user self-disclosure in the conversation; we could not find a difference between the ratings of the self-disclosing and the non-self-disclosing chatbots; and we could not find a difference between the ratings of conversations with high user self-disclosure and conversations with low disclosure.
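A correlation check of this kind might look like the sketch below; the per-conversation disclosure counts and ratings are invented for illustration.

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Invented data: per-conversation number of user self-disclosures and
# the 1-5 rating the user left at the end.
disclosure_counts = [0, 1, 5, 2, 8, 3, 0, 6]
ratings           = [3, 5, 4, 2, 3, 5, 2, 4]

r = pearson(disclosure_counts, ratings)  # weak correlation in this toy data
```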
0:15:12 So, to conclude: we studied the effect of self-disclosure in a real-world, non-task-based spoken dialogue system, with real users on Amazon Alexa devices, thanks to Amazon.
0:15:30 What we found is that indicators of reciprocity appear even in human-machine conversation, and that by whether users initially self-disclose, we can characterize their behavior throughout the conversation.
0:15:45 However, we could not identify a clear relationship between self-disclosure and liking.
0:15:51 Thank you.
0:15:58 [First audience question, not captured on the recording]
0:17:04 So the question is about the different aspects of self-disclosure. Yes, there are many dimensions to self-disclosure, such as its depth and its valence, and this study considers only its binary presence.
0:17:19 Future work would be to do this analysis at a finer granularity, looking at both positive and negative disclosures, or at the depth of the disclosure. With a more fine-grained annotation, maybe we would find a relationship between self-disclosure and liking.
0:17:37 Because even in psychology the relationship is not monotonic. There was a study in, I think, 1973 that divided self-disclosure into three categories: low, medium, and high. It found that medium self-disclosure resulted in liking, but high self-disclosure actually resulted in less liking, and the suggested reason is that people do not trust people who self-disclose very intimately.
0:18:09 So that may be an interesting direction.
0:18:16 [Second audience question, not captured on the recording]
0:18:29 I think some users may well have found it strange: they know Alexa does not have friends, so the machine's disclosure might just be ignored.
0:18:47 But the data also shows that people do talk back to the bot in a similar, social way, and many of our users seemed to find it believable; it was clear from how some people responded that they acted as though the machine really had these experiences.
0:19:21 That said, we do not have annotations of which individual users found it believable, so we did not check whether those users differ in the reciprocity effect. We would like to study that in future work.
0:19:45 [Third audience question, not captured on the recording]
0:20:25 Great question. Initially our bot only chatted and did not really ask questions, and people never disclosed to us, because they just did not know what to talk about.
0:20:40 So we redesigned the architecture so that the bot kept asking questions, and that elicits answers; but our annotation deliberately does not include direct responses to questions.
0:20:49 So even things like "I did see the movie" we do not count as self-disclosure. They technically are giving information, but they are just answering the question with the bare minimum of what was required.
0:21:04 We only consider utterances with information beyond what was necessary to be self-disclosure, and that shaped the data collection.
0:21:14 [Final audience question, not captured on the recording]
0:21:32 Do you mean the ratings?
0:22:05 I think that is something people are currently working on.
0:22:12 Okay. Thank you.