Okay. So I'm talking about understanding the user in social bot conversations. I'm really representing a team of students here, so I want to acknowledge that the students are really a part of this. I also worked with co-faculty advisers, and it's been a lot of fun working with the students.

Okay, so most of you probably know about the Amazon Alexa Prize. I should point out that the name of our system is Sounding Board, and it was, again, something that came out of a response to a call for a competition, the Alexa Prize posed by Amazon. The idea, back in 2016 when they solicited proposals, was that they wanted university students to build a social bot: a bot that can converse, quote unquote, coherently and engagingly with people on popular topics and current events. So it's very open domain.

So my grad student, the team leader, said he wanted to do this, and I said, "I think you're crazy, but okay." He got a team together, they wrote a proposal, we were selected, and then we built and fielded the system and all that. At the end of it we had about ten million, more than ten million, conversations with real users.

Between that and the fact that we were working with a new type of conversational AI, what we found is that there are a lot of research problems in this kind of dialogue that I hadn't thought of before. The focus of this talk is going to be understanding the user, in particular including user modeling, but I want to start out with the overall big picture, because the user modeling is just one small piece.

So what do I mean by a social bot, and why do I think this is a new type of conversational AI? A lot of work in conversational AI falls into two spaces, and people often talk about them as two different possible tasks. There is the virtual assistant, which does task-oriented dialogue; in that type of dialogue system you're executing commands or answering questions, and there isn't much social back and forth. On the opposite end of the spectrum is the chatbot, which is oriented towards chitchat ("how are you, what are you doing today") but has really limited content to talk about.

I like to think of these not as two different options but as two different types of conversation in a broader space that has at least two dimensions, probably more. There is the accomplish-a-task dimension, where the virtual assistant is trying to do something and the chatbot is not, and there is the social-conversation dimension, where the chatbot is being social but doesn't have as much to talk about. What we are trying to do is something in between: we're a little bit less social and a little bit less task oriented than the other two. Though I'd argue it is to some extent goal oriented, because you're providing information, and most social exchanges involve information. So, with that background:

What I'm going to talk about is, first, the idea of the social bot, for us specifically, as a conversational gateway. Then I'll give a system overview. I'll go through that quickly, because these are early days of working on social bots and the architecture we used is not going to be the architecture anybody uses a couple of years from now, but we need to understand it to see how we're collecting the data and what we're doing. Then I want to focus in on characteristics of real users; this is a somewhat anecdotal analysis, but I think it's important for understanding where we're going. And then I'll talk a little bit about our first steps in user modeling, and end with some open questions.

Okay, so: the social bot as a conversational gateway. What we see is that when people come to talk to the social bot, they don't have a specific task they want done (they don't want a restaurant reservation, for example), but they do come with some rough idea of what they might be interested in. They want new information, and their goals evolve over the conversation, so the social bot is collaborating in that evolving set of goals. The users in this case are also coming to a little device to talk to, so they know they're talking to a bot; we are not trying to pass a Turing test. I would argue that users should know that they're talking to a bot, so making the bot so human-like as to fool the users may not be such a good thing to do.

I know that for some people chatbots are a little controversial, but I think there really are applications for this. For example, you could imagine, in language learning, a conversational agent you can converse with, which is a good way to practice a language. Or tutoring systems: a good way to interact with and learn about information at your own pace, depending on your own interests. Or using a chatbot for information exploration, interactive health information, recommendations. Just to give you a sense of how you might imagine that: when I come home I actually use my Alexa (I'm not a power user), and oftentimes I want to listen to the news while I'm at dinner. Well, you can imagine that if you could interact with it, you could tailor the news to the stuff you're actually interested in. And then there's the notion of an exercise coach or health coach: we ended up teaching a conversational AI course, building on what we've learned, with teams of students, and there was a great coaching AI system that one of the student teams built. So there are a lot of actual applications I think this technology can lead to, and a lot of people have shown interest in it.

Okay, so our view is that it's a conversational gateway to online content. Again, when you come home you might want to talk to the system to learn about what's going on in the world. In this particular case we're scraping content online; it could be a news source, it could be video, though for us it's actually all text. So it's news sources; it could be weather (we're not using weather); and we use Reddit, reading from Reddit discussion forums. It could be anything: all of the stuff that's online, you could interact with.

Just to give you an example: this is an actual dialogue, and all the examples I'm going to give you are actual examples, exposing the warts of our system. In the first case, the user starts out by saying "let's chat," and that invokes the system. Because we were supposed to be anonymous in the competition, everybody was required to say "this is an Alexa Prize social bot," and after that it can just go on and chat. You have to chat about topics; you can't just play games and chat about the weather. So we offer topics, somebody will accept one, we'll talk about that, and we try to lead the conversation, particularly if somebody's not saying very much. For instance, in this case we're talking about movies, and we might talk about a director, or we might offer a review; that's how the dialogue goes. In the beginning I'm showing a recognition error: the top hypothesis is not what the person actually said. The reason we can still respond correctly is that we have the n-best alternatives, and so we can figure out, based on the probabilities and on the dialogue context, what the person actually said and respond to that.

Okay. So I want to highlight why and how this type of social bot is different from a virtual assistant, which has had much more research. Both of them are conversational AI systems with similar components; even if you're doing end-to-end neural modeling, you often build and train the stages separately. The stages are speech and language understanding, dialogue management, and response generation, and every system also has some sort of backend application that you're interacting with.

In a virtual assistant, the speech and language understanding is in a constrained domain, which can make it an easier task: you have task intents, and oftentimes you're filling out forms, finding constraints to resolve what the person wants to do. On the social bot side, the intents are more social or information oriented ("I want information on this topic"), so the intents are a bit different, and in terms of understanding, sentiment is going to play a role. On the dialogue management side, the virtual assistant is trying to resolve ambiguities, query and offer options to figure out the best solution to the problem, and then execute the task; the reward is timely completion of the task. The social bot is trying to learn about the interests of the user and make suggestions (at least in our system, which is information oriented, you want to suggest things the user might want to hear about), and the reward is user satisfaction, which is not so concrete, and that's very challenging. The backend for a virtual assistant is typically a structured database; our backend is totally unstructured, so we have to add structure. And lastly, because it's a constrained domain, virtual assistant response generation can be largely templated, whereas in our case it's open domain, because we could be presenting information on anything.

Okay, so let me tell you a little bit about our system. I'll give a bit of an overview of how we developed it and how we evaluate it.

Again, this was a new problem. When we started, we had no experience with Alexa Skills, and we didn't have our own dialogue system. Using Amazon's tools wasn't really a good solution, because they were designed for speech commands and form filling, which is fine, but that's not what we were doing; we were actually doing conversation, as opposed to the form-filling, task-oriented things the skill-building tools were designed for. So that was a little hard.

Beyond that, there was no data. People often challenge us on this: "Amazon had data; they should have given it to you." No, there was no data. Amazon did not have conversational data; they had interaction data, transactional interactions like setting a kitchen timer or playing music. They did not have conversations. I'm sure this was one of the reasons they ran the competition. After the first year, using the data from the teams, the recognition error rate went down, according to one of their papers, by three percent.

So we really didn't have data. It was a new problem, and what that means is there's no existing data for training. We started out thinking we would do sequence-to-sequence modeling, and it doesn't work, because there's no data. So yes, we were starting from scratch. And because we were starting from scratch, the data we collected in the beginning was good for retraining a recognizer, but it was not so good for learning how to improve our system. This is all to say that at the beginning the system wasn't so good, and it had to evolve. Okay, so that's setting the stage; now on to the system design.

Alright, so when we first started building a system and getting data, we realized we had to step back and think about what we wanted in the design. So think about what makes someone a good conversationalist. If you go to a party and you're looking for people to talk to, you generally want to talk to somebody who has something interesting to say, and you also want to talk to somebody who listens to you and shows they are interested in what you have said. Those principles seemed reasonable to apply to a social bot, and in fact I think they really worked for us. Here are some examples.

We saw that users would react positively to interesting content; I'll tell you later how we got that information. For example, around Christmas time people liked to talk about Christmas, and in crawling our content we had found this little tidbit: SpaceX sent beer ingredients to the International Space Station just in time for Christmas. A lot of people found that kind of interesting and liked that piece of information. They also liked cool science (a lot of our users are into that), so they liked the fact that babies as young as ten months can gauge how much someone values a particular goal by observing how hard they are willing to work to achieve it. To the right people, that was interesting, and they liked it.

They did not like old news. We had to fix that problem really early on: if you tell somebody something that's two years old, that gives us bad reviews. They also didn't like unpleasant news, and it turns out there's a lot of bad news in current events; if you're scraping news, you will get plane accidents where people die and things like that. So we started filtering, because we were seeing bad reactions. But filtering is a really hard problem: we can filter for people dying, but one piece of news that people really didn't like was something about cutting a dog's head off. That's really unpleasant, and we want to filter that kind of thing too.

Another thing we want to do is show interest in what the user says. Of course they're going to lose interest if you give them too much stuff they don't want to talk about. They want to get acknowledgement, which really matters in these conversations, and they need encouragement to express their opinions, which they're not used to doing with these devices. So we ask questions like "Have you seen Superman?" "Yes." "Which part did you like best?" That's an important part of the dialogue. Unfortunately, to ask good questions you need a little bit of knowledge of the world. You can ask fairly standard questions about movies, but once the domain gets broader, we might ask questions like "This article mentioned Google. Have you heard of it?" This actually happened to us in a demo; in that setting everybody laughed, but with actual users that kind of question gets annoying.

Alright, so this leads to our design philosophy, which I'll summarize briefly: we're content driven and user centric. On the content-driven side, we had to crawl daily to keep our information fresh, so we had a large and dynamic content collection, represented with a knowledge graph, and a dialogue manager that promotes popular content and diverse sources. On the user-centered side, we had language understanding that incorporates sentiment analysis, we try to learn user personality, the dialogue manager handles topic changes and tracks engagement, and on the language generation side we try to use prosody and appropriate grounding.

This is the system. I'm not going to walk through everything, I'm just giving you the big picture, but you can see there's a language understanding component, a dialogue management component, language generation, and a backend where we're doing content management. We're using a question answering system that Amazon provides, and we're using AWS services for some of the text analysis. So that's the big picture. There are lots of modules, because we're at the beginning stages and were constantly swapping in, changing, and enhancing things, so it's a modular architecture to allow rapid development.

Very quickly, on each of the components. Natural language understanding is multidimensional: we're trying to capture different things, because some responses can be long and can contain both questions and commands, and we have to detect the topics people are trying to talk about as well as the user's reactions.

The dialogue manager is hierarchical: we have a master and miniskills. The master is trying to control the overall conversation, negotiating and finding topics to talk about, thinking about coherence of topics and engagement of the user. And of course, since we're content driven, it also has to consider content availability: you don't want to suggest talking about something you have nothing to say about. The miniskills are more focused, related to social aspects of the conversation and to different types of news sources, because different types of information sources come with different types of metadata and extra information. With movies we have relations between, say, actors and movies, while for a general news source we just have the news text and the metadata about the topic.
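As a rough illustration of the hierarchy (hypothetical class names, not the real Sounding Board implementation), you can think of the master as choosing a miniskill based on content availability and the user's engagement:

    # Sketch of a master dialogue manager delegating to miniskills.

    class Miniskill:
        def __init__(self, name, content_index):
            self.name = name
            self.content_index = content_index  # topic -> list of content items

        def has_content(self, topic):
            return bool(self.content_index.get(topic))

        def respond(self, topic):
            item = self.content_index[topic][0]
            return f"[{self.name}] {item}"

    class MasterDialogueManager:
        def __init__(self, miniskills):
            self.miniskills = miniskills

        def next_move(self, candidate_topics, engagement):
            # If the user seems disengaged, prefer switching topics over digging deeper.
            topics = candidate_topics if engagement >= 0 else list(reversed(candidate_topics))
            for topic in topics:
                for skill in self.miniskills:
                    if skill.has_content(topic):
                        return skill.respond(topic)
            return "How about we try a different topic?"

    movies = Miniskill("movies", {"superman": ["The director also made ..."]})
    facts = Miniskill("facts", {"space": ["SpaceX sent beer ingredients to the ISS."]})
    dm = MasterDialogueManager([movies, facts])
    print(dm.next_move(["superman", "space"], engagement=1))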

Going back to the example I showed before: in this example there are stages of negotiation, which are handled by the master, and different types of information sources that we're jumping between, which are handled by the different miniskills. The movies miniskill is one; we also scrape from a subreddit that gives us facts, and from another source that gives us a different kind of content, and that's where we're jumping between skills.

From the language understanding we get dialogue acts, which go to the dialogue manager; from the dialogue manager we get the information to be presented; and the response generation turns those into the actual text the system is going to say. That includes phrase generation but also prosody adjustment. For the things the system says a lot, you can adjust the prosody in the speech synthesis. We have no control over the audio itself, but we do have control through SSML, so you can make the voice sound enthusiastic, which you have to do with the prosody, instead of having a flat, default intonation.
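For example, a fixed response can be wrapped in SSML prosody markup along these lines (the specific rate and pitch values here are illustrative, not the ones we actually tuned):

    # Toy illustration of adding SSML prosody markup to a canned response.

    def enthusiastic(text):
        return (
            "<speak>"
            '<prosody rate="110%" pitch="+10%">'
            f"<emphasis level='moderate'>{text}</emphasis>"
            "</prosody>"
            "</speak>"
        )

    print(enthusiastic("That's a great choice!"))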

But for the content that we present, the news, we don't just read it as is; we rework it a bit to get something that's more conversational. Still, that's text from a pretty open domain, and that's really hard to control prosody for. We also do some filtering in the response generation, which you'll see later.

For content management, as I said, we crawl online content, and we have to filter inappropriate and depressing content. Then we index it, using some parsing and entity detection. We use metadata from the source for topic information, and we also use popularity metadata. Then we put it all into a big knowledge graph; our knowledge graph had around eighty thousand entries and three thousand topics, and an entry can have multiple topics. Here's the idea: in the upper left, say, is a bunch of news articles or bits of content that mention UT Austin; over here is a bunch of things that mention Google; et cetera.
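A minimal sketch of that indexing idea (illustrative only; the real system stores this in a database and uses richer metadata): each content item is filed under every topic or entity it mentions, so the dialogue manager can look up candidates and related topics directly:

    from collections import defaultdict

    class ContentGraph:
        def __init__(self):
            self.by_topic = defaultdict(list)   # topic/entity -> content items
            self.topics_of = defaultdict(set)   # content id -> topics/entities

        def add(self, content_id, text, topics, popularity=0.0):
            for t in topics:
                self.by_topic[t].append((popularity, content_id, text))
            self.topics_of[content_id] = set(topics)

        def candidates(self, topic, k=3):
            # Most popular items first.
            return sorted(self.by_topic.get(topic, []), reverse=True)[:k]

        def related_topics(self, topic):
            # Topics that co-occur with this one in any content item.
            related = set()
            for _, cid, _ in self.by_topic.get(topic, []):
                related |= self.topics_of[cid]
            return related - {topic}

    g = ContentGraph()
    g.add("a1", "UT Austin researchers ...", ["ut austin", "research"], popularity=0.8)
    g.add("a2", "Google announced ...", ["google", "research"], popularity=0.9)
    print(g.candidates("research"))
    print(g.related_topics("research"))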

Okay, so the system is evaluated by what Amazon decided, and basically that was one-to-five user ratings; that was the most important thing. Then, for the finals, there was also conversation duration, the ultimate goal: if we had made it to twenty minutes with all the judges, the team would have gotten a million dollars. We actually did really well. I didn't expect us to even get to five minutes, so ten minutes was pretty good; it's a really hard problem. The interesting thing is that the judges were not the same as the development users. All of the development feedback came from Amazon users, but in the finals there were three people serving as interactors and three as judges, and they were people who were motivated conversationalists, people like news reporters. The motivated conversationalists actually lasted a lot longer than the average Amazon user; however, they were more critical, so the average Amazon user gives a higher score. So that's basically how it works.

So what we had is ratings from average Amazon users. But the rating is given at the end of the conversation, there is a huge amount of variance, and some users, actually more than half of them, decline to rate the system. So the ratings are expensive, noisy, and sparse. On top of that, conversations are not uniform: word sense ambiguities can lead you to do something that's off topic, you can get bad recognition, you can pick depressing news. You can have sections of the conversation that work well and sections that don't. So the overall score does not equally represent all parts of the conversation.

So in order to actually use that overall score to meaningfully inform design, we take advantage of the fact that users give us more information: they accept or reject topics that we propose, they propose topics themselves, and their reaction to the content is important. What we actually do is take the conversation-level rating and project it back onto dialogue segments (we can segment easily because we know the topics from the system's perspective), and we project it non-uniformly, using information about user engagement. Once we have those segment-level estimated ratings, we can aggregate across conversations: for example, we can aggregate per topic, we can aggregate for specific content, or eventually we could aggregate per user. This is how we can figure out that this is content a lot of people like and this is content a lot of people don't. So that's basically it.
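Here is a minimal sketch of that projection-and-aggregation idea, using a stand-in weighting scheme (engagement-weighted averaging), not the exact estimator we used:

    from collections import defaultdict

    def aggregate_by_topic(conversations):
        """conversations: list of (rating, [(topic, engagement), ...]).
        Each topic segment inherits the conversation rating with weight
        proportional to its engagement; returns topic -> weighted average."""
        num, den = defaultdict(float), defaultdict(float)
        for rating, segments in conversations:
            for topic, engagement in segments:
                num[topic] += engagement * rating
                den[topic] += engagement
        return {t: num[t] / den[t] for t in num if den[t] > 0}

    convs = [
        (4.0, [("movies", 0.9), ("news", 0.1)]),
        (2.0, [("news", 0.5), ("sports", 0.5)]),
    ]
    print(aggregate_by_topic(convs))   # news ends up scoring lower than movies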

Now let me talk about the users, but first some constraints we were operating under. We do not have the audio. For speech recognition, all we get is text; we don't get audio, for privacy reasons, and ASR is imperfect. Because we don't get any audio, we don't get pauses, we don't have sentence segmentation (that's been changed in a later version, but we didn't have it), and we don't have intonation, so there's a lot of things we cannot detect. What we can do is use the ASR n-best lists, but that's all we can do. So there are some constraints, and, just to say, a lot of the errors are false alarms; I'm going to show you some examples so you can appreciate that.

Okay, so now some observations from conversations. What I want to do here is give some observations, then talk about their implications, and after that I'll get to the user modeling. There are four main points I want to make. Users have different interests. They may have different opinions on the same thing; in the US, a news story about Trump will elicit opposite reactions from different users. They have different senses of humor: some people like our jokes and some people don't. They have different interaction styles and different verbosity, and they're different ages, since these are family devices. Just to give you an example of how this impacts the system:

One of the things we found was that people like to talk about vampires, for some reason. This was a piece of information that got presented a lot: it basically says, did you know that real-life vampires are tiny monsters that burrow into people's heads. For some reason people like to talk about that. Now, we don't control the prosody on this, because this is general content, so it's basically read with flat prosody. When people are listening to this, if they're actually listening, they're often amused; it's kind of funny. But sometimes they think it's bad, or they're confused about what they heard, because it didn't make sense to them. And sometimes you can tell they're not really listening at all. So reactions vary, the user community is a little more complicated than you might expect, and the same piece of content can result in topic changes for the people who don't like it.

Users have different interaction styles. Here's one user talking about the vampires, a chatty user; I'll come back to this user for other examples. And then there's the terse user, which is actually the more frequent category, where a lot of the answers are one word. This is important to appreciate because it affects language understanding. The chatty type of user is actually harder for language understanding in one way, because there are more recognition errors and it's harder to get the intent. But the terse type of user is also hard for language understanding, because we don't have prosody. Take saying "no": if I ask a question, "do you want to hear more about this," and the person says "no," that means they do not want to hear more. But if you say something surprising and they answer with an emphatic "no," that can actually mean they do want to hear more about it. So because we don't have prosody, it's important that we use state-dependent dialogue and language understanding, but even that doesn't always get it right. This is my argument for why industry should give us the prosody.

Okay, so users have different goals related to information seeking. Some people just generally want to know more, others ask specific questions, and others ask really hard questions, like "why?" Here's the vampire example again: the chatty user laughs along and starts asking a question relevant to the topic of vampires, "is it really true that they're like TV vampires," and then there's a speech recognition error. Then there's opinion sharing: some people like to spout off, they like to share their opinions. That's actually not so hard to deal with, because you can acknowledge the opinion; you might agree or you might not. And then there are other people who want to get to know each other: they want to find out what Alexa's favorite X is, and tell us about their favorite Xs. So those are different goals you have to accommodate.

We also have adversarial users. We're supposed to be family friendly; if we do things that are not, we can get taken offline, and since this was really a field deployment for us, we did not want to get taken offline. So we worked really hard (and it did happen more than once), we worked really hard to build content filters and to come up with strategies to handle adversarial users. In this particular case, we're not supposed to talk about anything related to pornography or sex or anything like that, but a lot of users bring it up, so you just have to have a strategy for dealing with it; in this case we just tell people we'd rather not talk about that. Then there's offensive language. One time we got taken offline because, when you don't understand what somebody said, sometimes a good strategy is to repeat what they said back to them; we were filtering all the content we were presenting, but we forgot to filter what the people said. Our solution there was to take the bad word and replace it with random funny words. One of my students came up with this; I thought it was a really stupid idea, but it actually makes people laugh, and people really liked it. So we might say "unicorn," or whatever random word, and it's actually funnier if it lands in the middle of a sentence: "butterfly," or whatever it is. And then we change the subject. And then there are a lot of people who just try to test the system's limits; you have to have a strategy for them too, like "I don't understand," or whatever. Okay.
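As a toy version of the word-replacement strategy mentioned above (the word lists are obviously placeholders, not the real filter):

    import random

    BLOCKLIST = {"badword1", "badword2"}          # placeholder for the real filter
    FUNNY_WORDS = ["unicorn", "butterfly", "noodle", "pickle"]

    def sanitize_echo(user_utterance):
        """Before echoing the user's words back, swap filtered words for funny ones."""
        out = []
        for word in user_utterance.split():
            if word.lower().strip(".,!?") in BLOCKLIST:
                out.append(random.choice(FUNNY_WORDS))
            else:
                out.append(word)
        return " ".join(out)

    print(sanitize_echo("you are a badword1"))   # e.g. "you are a unicorn"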

The last problem is working with children, and we have a lot of children talking to the system. Problem one with children is that speech recognition just doesn't work as well for young children; everybody knows that. Companies have included some things to handle the age range, but for really young children it doesn't work as well. Looking at the n-best lists, I'm quite sure this is a kid talking about their pet hamster, but other than that it's really hard to figure out what they were talking about. In a case like this, asking them to repeat is not going to solve the problem; it's better to just change the topic. The second issue is content filtering. When you're talking to a kid at Christmas time, a lot of times, in the US at least, they want to talk about Santa Claus. Unfortunately, a lot of the content we were scraping was written for adults, and we did get taken offline because the system said Santa Claus was a lie. We were not the only team this happened to, and after that we stopped bringing up Santa Claus at all.

Okay, so we have a user personality module. It's based on the five-factor model: we ask questions based on a standard questionnaire, but reworded to be more conversational, and we mix in some filler questions that we don't actually use, to make it more engaging for people. We can't ask too many, because this is an interaction where we're supposed to talk about topics people want to talk about, not give them a survey. So the data we have is very noisy and impoverished, since we're not asking that many questions, but it does give us some information. What we can see is that, for the traits we explored, certain personality traits correlate with higher user ratings: people who are extroverted, agreeable, or open give us higher ratings, which sort of makes sense. What I think is more interesting is that there is a statistically significant correlation between personality traits and some of the topics people like. Not for the topics that everybody likes, and not for everything, but for certain topics there is a systematic correlation, and the data seem pretty reasonable: extroverts like recent fashion, introverts like AI, if you're imaginative and open you like things like AI and time travel, and low conscientiousness, which was explained to me as not liking to clean your room, goes with liking Pokémon Go and Minecraft. So the data actually sort of makes sense.

Okay, so to summarize this part: the implication is that user characteristics affect every single component of the system. Age, dialect, and verbosity impact language understanding. Your interests affect dialogue management, and if you talk a lot there are more recognition errors, which affects the dialogue management strategy. Your interests affect content management, and your age does too, because of how the filtering works. As we begin user modeling, we want multidimensional evaluation, so we can get ratings for different user types. And lastly, the phrasing we use in generation should be adjusted based on whatever information we have about the user.

So, user modeling. This is really early work; it's preliminary, nothing published, but I thought it would be interesting to talk about with this audience. I'm going to talk a little bit about why we care, namely content ranking, and then about user embedding models.

The task we're interested in is: given a particular piece of content, predict whether the user is going to engage positively, negatively, or neutrally with that content. The content is characterized in terms of the information source, topic, and entities, and at some point later, sentiment and valence, but we haven't done that yet. User engagement is characterized in terms of what topics the user suggests, what topics the user accepts or rejects, positive or negative sentiment in reaction to the content, but also positive or negative sentiment in reaction to the bot, because that reflects being unhappy overall, maybe with the content in general but not with a specific piece.

The types of features we're using include some user-independent features (that's like the bias term), such as relatedness to the current topic and general popularity across dialogues; then user-specific features, mapping these different measures of engagement into additional features; and then we're trying to use what the user says to capture things like age and personality.
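A sketch of what that feature setup might look like (the feature names are illustrative, not our actual feature set), which could then be fed to any standard classifier:

    def make_features(content, user_history):
        return [
            content["popularity"],                                    # user-independent
            content["relatedness_to_current_topic"],                  # user-independent
            user_history["accept_rate"].get(content["topic"], 0.0),   # user-specific
            user_history["pos_sentiment_rate"],                       # user-specific
        ]

    # With labeled examples (engaged = 1, not engaged = 0) you could fit, e.g.:
    #   from sklearn.linear_model import LogisticRegression
    #   clf = LogisticRegression().fit(X, y)
    x = make_features(
        {"popularity": 0.8, "relatedness_to_current_topic": 0.6, "topic": "movies"},
        {"accept_rate": {"movies": 0.7}, "pos_sentiment_rate": 0.5},
    )
    print(x)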

The issue here is that we have very little data per user, and we have to treat each conversation independently. We know which conversations came from the same device, but these devices are used by families, and oftentimes more than one person uses them, so you cannot assume the person is the same from conversation to conversation on a specific device. In the future you could still try to use that information, but for now we use only a single conversation. So the data is very sparse, and you have to learn from other users.

so

just this is just a motivational slide

this is just say that the user is really important so when we're predicting the

final rating of the conversation if we consider topic factors

i didn't factors and user factor so topic factors are what the topics or the

topic coherent stuff like that

who was it's just by the agent that there is there are things that the

agent's is

how they say that and then the user factors are user engagement and

the robot's them and things like that

user factors are alone

you better performance than everything together

in predicting the final conversation level so the user is really of work

Okay, so far I haven't mentioned neural networks, except to say that we didn't do end-to-end training. I'm going to mention them now. That doesn't mean they aren't used elsewhere in the system (everything has to be fast, et cetera), but we are using them for learning user embeddings. The first thing we did was actually not a neural network: it was Latent Dirichlet Allocation, which is a standard way to do topic modeling.

The way we think about this is that each user is a bag of words, with the user's side of the conversation playing the role of a document, and the clusters that LDA finds, instead of word topics, would be user types; so it's unsupervised learning of user types. We used just a handful of topics, i.e., clusters, because we don't think there are that many different user types, and that way the result is somewhat interpretable.
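A minimal sketch of the setup, assuming each "document" is the concatenation of one user's turns (toy data here; in practice you also need the word filtering discussed below):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    user_docs = [
        "yes no yes play a game play another game",
        "tell me about music who sings this song music",
        "what is your favorite movie my favorite movie is",
    ]

    vectorizer = CountVectorizer()            # frequent-word filtering omitted here
    X = vectorizer.fit_transform(user_docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    user_type_mixture = lda.fit_transform(X)  # each row: one user's mix of "types"
    print(user_type_mixture.round(2))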

If you look at the most frequent words per cluster, you see the following phenomena. People who like to interact in certain ways show up as clusters: people who like to play games are one cluster, people who talk about music are another, and the personality quiz shows up as another. There are people who are device oriented ("Alexa, what's your name, what's your favorite...") versus self-oriented people ("I think," "I am"). There are people who are generally positive ("oh wow, interesting"), and there are people at the other end who disengage or react negatively.

So that's LDA. First of all, to get interesting, interpretable clusters with LDA you have to play some tricks: you have to do some filtering, dropping frequent words. It turns out, though, that we really needed to keep "yes" and "no" in there, because there are positive people and negative people; but because we ask a lot of yes/no questions, those words dominate, so you have to be careful about what you filter out. So it takes some tweaking to make it work. And there's a more fundamental question: we're clustering by essentially optimizing perplexity. Is that the right objective to get the right user types?

So we also played around with a different objective to learn user embeddings, and that was user re-identification; this is also unsupervised. The idea is that you take a bunch of sentences from a user and a bunch of other sentences or utterances from the same user, and you try to learn embeddings that bring things from the same user closer together and push things from a different user farther apart. So we have a distance to self that we want to minimize and a distance to others that we want to maximize; there's a minus sign. When somebody is talking about a topic and they keep talking about that topic, we want those to be close, and when a different person talks about something totally different, that's going to be far away. That's another way of drawing out what distinguishes users.
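A minimal sketch of that objective, written as a margin-based contrastive loss with a toy linear encoder (our actual model and training details differed):

    import numpy as np

    def embed(bag_of_words_vector, W):
        """Toy encoder: a linear projection of a bag-of-words vector."""
        return W @ bag_of_words_vector

    def contrastive_loss(anchor, same_user, other_user, W, margin=1.0):
        d_self = np.linalg.norm(embed(anchor, W) - embed(same_user, W))
        d_other = np.linalg.norm(embed(anchor, W) - embed(other_user, W))
        # Minimize distance to self, maximize distance to others (the minus sign),
        # with a margin so the loss is zero once they are separated enough.
        return max(0.0, margin + d_self - d_other)

    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 50))
    a, s, o = rng.random(50), rng.random(50), rng.random(50)
    print(contrastive_loss(a, s, o, W))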

so

if we this work was actually done related

and we have this problem where we're gonna let cid each and what are you

serious and i say finally somebody else like that

you from their tweets

so using this unsupervised learning which we call reality it turns out and you're picking

in from forty one person in forty three thousand random people we evaluated with mean

reciprocal rank

so basically the mean rain

with our best

just which was initialized with worked about

and then use the identification is twelve that well at a forty three thousand is

pretty good

lda is a five hundred
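For reference, mean reciprocal rank is computed like this:

    def mean_reciprocal_rank(ranks):
        """ranks: rank of the correct match for each query (1 = best)."""
        return sum(1.0 / r for r in ranks) / len(ranks)

    print(mean_reciprocal_rank([1, 2, 10]))   # -> about 0.53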

So this type of user embedding, I think, is very promising for learning about user types.

Okay, so how do we evaluate these embeddings in our setting? With the task I described: using the embeddings to predict engagement and conversation-level ratings.

Okay, so in summary, first the Sounding Board part and then the user part. The social bot as a conversational gateway involves not accomplishing tasks but helping the user evolve their goals and collaborating to learn their interests; what the user is doing is learning new facts, exploring information, and sharing opinions. At the end of the day it's a conversational AI system, and the critical system components are related to the user, tracking user intents and engagement, but also to managing an evolving collection of content, which you can think of as a social chat knowledge base. And as I said in the beginning, with millions of conversations with real users, this new form of conversational AI raises many research problems; this is just the tip of the iceberg.

So: the social bot has a user group that is doing information exploration, and there is a lot of user variation. I'm sure other conversational AI systems get a lot of user variation too, but we see a lot of it. Understanding the user involves not just what they said and their sentiment, but also who they are. And lastly, the user model has implications for all components of the dialogue system, and for evaluation.

So there are lots of open issues; this is the usual tip-of-the-iceberg picture: user-dependent reward functions, dialogue policy learning, user-aware response generation (you could have a context-sensitive language model that uses the user model as an input), and user simulators. These are all things you could do that we haven't started on, though the user-dependent reward function we are working toward anyway. So it's a rich platform for language processing research, and I'll stop there.

That's the part I know best, and there were definitely other teams participating who were interested in user modeling.

The system we actually fielded had essentially no user modeling; the work I just described is post-competition, using our data. In the fielded system we didn't have the engagement detection or most of the personality stuff. We did start with personality (we used personality to predict topics), so we had a little bit, but not much.

There were other teams interested in user modeling; I don't know specifically what they did. I know more about the three finalists because of their presentations, and I don't think there was much user modeling in those. I'd say the bigger difference is that we did less of the reinforcement-learning style of approach, because we felt we just didn't have the data, while other teams did more of that. So I think there is a difference in terms of the styles of the approaches.

And the thing is, everything is important, so what was most important? I think the user-centric stuff: in terms of being user centered, we would change topics quickly if things were going sour, and I think that helped us. I think the prosody-sensitive generation helped. But most importantly, having lots of topics and lots of interesting content helped us. The other things that other teams did probably would have helped us too if we had incorporated them; there just wasn't time. So it's hard to compare what was more important across teams.

Exactly, and that was indeed the strategy. I agree, and we don't do it very often. What we did is we had a series of strategies for when we didn't understand what the person said. That was one of them; we also had the strategy of asking for repetition, and the strategy of saying we don't understand. There were at least five different strategies, and we would cycle between them with some randomness, but also using the detected sentiment of what the person said to figure out which to prioritize, to bias between the different strategies. So our way of dealing with it is to sample among different strategies.
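A toy version of that sampling scheme (the strategy list and the weighting are made up for illustration):

    import random

    STRATEGIES = [
        "ask_repeat",        # "Sorry, could you say that again?"
        "admit_confusion",   # "I didn't quite get that."
        "change_topic",      # "How about we talk about movies instead?"
        "echo_with_filter",  # repeat back what we (think we) heard, filtered
        "generic_backoff",   # a neutral continuation
    ]

    def pick_recovery(sentiment, recent):
        """sentiment in [-1, 1]; 'recent' holds recently used strategies to avoid repeats."""
        weights = []
        for s in STRATEGIES:
            w = 1.0
            if sentiment < 0 and s == "change_topic":
                w = 3.0                      # frustrated user: prefer moving on
            if sentiment >= 0 and s == "ask_repeat":
                w = 2.0                      # patient user: it's okay to ask again
            if s in recent:
                w *= 0.2                     # encourage cycling, avoid repetition
            weights.append(w)
        return random.choices(STRATEGIES, weights=weights, k=1)[0]

    print(pick_recovery(sentiment=-0.6, recent=["ask_repeat"]))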

There was at least one team, maybe more than one, that actually used Eliza and incorporated it in a similar way. It wasn't exactly like our miniskills, but a little bit like a miniskill, in that they would occasionally slot Eliza into the conversation. We did not do that; we just had our own implementation of that kind of response as one particular strategy.

Very few people do that: take the initiative and ask questions that are a little bit more difficult. That's the "why" question, and when people do ask it, it's really hard; we don't have a solution for that right now. More often they'll ask a slightly more specific question, and we can come up with a not-great response that is at least better than "I don't know." Something like "I don't know, what did you find interesting?" is a valid but not great response. And that's a wonderful question, whether they are genuinely asking or not; because we don't have the prosody, we can't tell. In a different version of this talk I had those examples, and it's very frustrating. Prosody analysis is not perfect, but with it you would have a much better idea; it would be easier to detect sarcasm.

No, right now our natural language generation is not at all sophisticated. That's an area where I would definitely want to improve; it's just, in my own mind, not the highest priority. When we generate the content, the news or the information or whatever it is, basically we take what we got from Reddit and present it with minimal transformations. There are transformations to make it shorter, and some simple things to make it a little more suited to a conversation, but mostly, things that really aren't suited to conversation we just filter out. So it's essentially wrappers around the retrieved content, which is fairly straightforward. This is an area where we could do a whole lot better.

The knowledge graph basically provides links, and we weave between them. If you want the actual technical details: it uses DynamoDB on the Amazon cloud, and I can point you to my grad student for how we did that. It really matters, because when we were live we had to handle lots of conversations all over the country, so everything had to be super efficient, and within a conversation you have to respond quickly, so again everything has to be super efficient. What the knowledge graph lets you do is say, from this point, if I want to stay on topic or move to related topics, here is the set of things I could go to, and then we have content ranking on top of that.
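A minimal sketch of that "stay on topic or move to a related topic" lookup plus ranking (the scoring function is a placeholder for the real content ranker):

    def next_content(graph, current_topic, user_interest, k=3):
        """graph: {topic: {"items": [(id, popularity)], "related": [topics]}}
        user_interest: {topic: weight}. Returns top-k (score, topic, item_id)."""
        candidates = []
        topics = [current_topic] + graph.get(current_topic, {}).get("related", [])
        for t in topics:
            for item_id, popularity in graph.get(t, {}).get("items", []):
                score = popularity + user_interest.get(t, 0.0)   # stand-in ranking score
                candidates.append((score, t, item_id))
        return sorted(candidates, reverse=True)[:k]

    graph = {
        "movies": {"items": [("m1", 0.9)], "related": ["actors"]},
        "actors": {"items": [("a7", 0.6)], "related": ["movies"]},
    }
    print(next_content(graph, "movies", {"actors": 0.3}))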