0:00:15 Hello everybody. Today I will speak about a segment-level confidence measure for spoken document retrieval.
0:00:22 This is the outline of my presentation. After a brief introduction of the motivation and the issues, I will speak about the indexability estimation for documents, and then about the prediction of this indexability. I will then present the experimental results and finally the conclusion.
0:00:43 This work is part of the spoken document retrieval task, where an automatic speech recognition system produces a transcription and, when a user formulates a query, a search engine returns the documents to the user as a ranking. 0:01:01 However, the errors of automatic speech recognition systems can affect the accuracy of the transcripts, and therefore the spoken document retrieval task and the global performance of the system.
0:01:21 In this work, the problem we address is to check automatically whether a document can be added to the database for indexing. This means looking at the document's performance in terms of spoken document retrieval: the automatic speech recognition system transcribes some documents erroneously, and when the user formulates a query, the search engine cannot return these erroneous documents in the first ranks. 0:01:58 So we have to introduce a method to automatically detect these erroneous documents; they can then, for example, be manually corrected before we introduce them into the database.
0:02:15 I will now present the indexability estimation for a document. On the left of the slide, the document in blue is provided by the automatic speech recognition system and the other documents are manually transcribed; on the right, all documents, including this one, are manually transcribed. 0:02:43 We submit a set of queries to the search engine, which returns the two rankings, and we obtain a rank for the document in each of them. 0:02:56 Finally, we compute the indexability estimation for the automatically transcribed document as the mean of its retrieval scores over the twenty best queries. This is the indexability estimation for the document.
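The estimation just described can be sketched in code. This is a toy illustration, not the authors' implementation: the exact per-query score is not audible in the recording, so a reciprocal-rank score is assumed here, averaged over the document's twenty best queries.

```python
def indexability(ranks, n_best=20):
    """Estimate a document's indexability from its retrieval ranks.

    `ranks` holds the rank (1 = first) of the automatically transcribed
    document for each query that retrieves it.  The per-query score is
    assumed to be the reciprocal rank; the estimate is the mean of that
    score over the document's twenty best queries.
    """
    scores = sorted((1.0 / r for r in ranks), reverse=True)[:n_best]
    return sum(scores) / len(scores) if scores else 0.0
```

A document whose transcript is retrieved near rank 1 for its best queries gets an estimate near 1; an erroneous document pushed far down the rankings gets an estimate near 0.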
0:03:15 I will now present the prediction of this indexability. The goal of this work is to predict whether a document can be added to the database. The principle is based on the combination of two kinds of measures on the words: the first is the correctness of the words, namely the confidence measure; the second is a semantic modeling of the words, named the semantic compactness index. 0:03:48 We use a multilayer neural network to combine the metrics and predict the indexability. Later, in the results section, we will discuss the results of this prediction.
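The combination step can be illustrated with a minimal forward pass of a small multilayer perceptron. The talk does not give the actual architecture or trained weights, so the layer size and every weight below are purely illustrative.

```python
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def predict_indexability(confidence, compactness, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer perceptron mapping the two
    per-document metrics to a predicted indexability in (0, 1).
    The weights would come from training; none are given in the talk."""
    hidden = [sigmoid(w1 * confidence + w2 * compactness + b)
              for (w1, w2), b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)
```

In practice the weights would be fitted on documents whose true indexability is known from the manual transcriptions.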
0:04:08 (There is some problem with the color.) The first metric is the confidence measure, which is extracted from the automatic speech recognition system and represents the correctness of the words. We use twenty-three features grouped into three classes: acoustic, linguistic, and graph classes. The confidence measure of a document is the mean of the confidence measures of the meaningful words of the document. 0:04:40 Here is an example for each class: in the acoustic class we can find the log-likelihood of the word; in the linguistic class, the n-gram probability; and in the graph class, the width of the complete graph, which represents the number of alternative hypotheses in the word segment.
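The document-level confidence described here, i.e. the mean of the per-word confidences restricted to the meaningful words, can be sketched as follows. The stop list and the already-combined per-word scores are illustrative assumptions.

```python
STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in"}  # toy stop list

def document_confidence(scored_words):
    """Mean confidence over the meaningful (non-stop) words of a document.

    `scored_words` is a list of (word, confidence) pairs; each confidence
    is assumed to already combine the 23 acoustic, linguistic and graph
    features mentioned in the talk.
    """
    scores = [c for w, c in scored_words if w.lower() not in STOP_WORDS]
    return sum(scores) / len(scores) if scores else 0.0
```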
0:05:03 The second metric is the semantic compactness index. In the state of the art, semantic information is in some cases used to improve the accuracy of confidence measures for automatic speech recognition systems. 0:05:23 But we can notice that the insertion or substitution of meaningful words impacts a spoken document retrieval system. 0:05:35 To measure this, we propose a local detection for the words which relies on a sliding context window, representing a bag of words, and on a large corpus used as reference. 0:05:53 For example, words such as "hospital" and "patient" appear in the same contexts, whereas an erroneously recognized word never appears in the same context as the other words, so it receives a low value.
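The sliding-window idea can be sketched as a cosine similarity between a word's observed context and its co-occurrence profile collected on the reference corpus. This is a toy reconstruction; the window size and the exact similarity used in the work are assumptions.

```python
from collections import Counter
from math import sqrt

def cooccurrence_profiles(reference_tokens, window=2):
    """On the reference corpus, count which words appear near each word."""
    profiles = {}
    for i, w in enumerate(reference_tokens):
        context = (reference_tokens[max(0, i - window):i]
                   + reference_tokens[i + 1:i + 1 + window])
        profiles.setdefault(w, Counter()).update(context)
    return profiles

def compactness(word, context, profiles):
    """Cosine similarity between the word's sliding-window context (a bag
    of words) and its reference co-occurrence profile.  An erroneous word
    that never shares a context with its neighbours scores near zero."""
    ref = profiles.get(word, Counter())
    obs = Counter(context)
    dot = sum(obs[w] * ref[w] for w in obs)
    norm = (sqrt(sum(v * v for v in obs.values()))
            * sqrt(sum(v * v for v in ref.values())))
    return dot / norm if norm else 0.0
```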
0:06:20 Now I will speak about the experiments. The transcriptions are generated using the automatic speech recognition system of the LIA. It is based on an A* search and uses a lexicon of sixty-seven thousand words. 0:06:43 The corpus is the ESTER test set, which contains approximately eight hours of broadcast news and approximately seven hundred documents. Documents have a maximum duration of about sixty seconds, and each document contains a few dozen words. 0:07:10 The system performs at about a thirty-five percent word error rate as a real-time system. 0:07:21 The search engine we use is based on term frequency and document frequency weighting. The query set contains one hundred sixty thousand queries extracted from the headlines of a newspaper. 0:07:40 The reference corpus we use is Wikipedia, and both the corpus and the queries are filtered in order to keep only the meaningful words. 0:07:53 We train the neural network on one part of the data and run the experiments on the other part.
0:08:04 I will now present the prediction results. We use two metrics: the distortion between the predicted indexability and the true indexability, and the root mean square error. 0:08:18 As you can see in the table, we compare the prediction of indexability using only the confidence measure, using only the semantic compactness index, and using the combination of the two metrics. 0:08:38 The combination gives the best performance: the distortion is about six percent better, and the root mean square error about fourteen percent better.
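The two evaluation metrics can be written down directly. The talk does not define "distortion" precisely, so it is taken here as the mean absolute difference between predicted and true indexability; that definition is an assumption.

```python
from math import sqrt

def distortion(predicted, actual):
    """Mean absolute difference between predicted and true indexability
    (assumed definition; one value per document)."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

def rmse(predicted, actual):
    """Root mean square error between predicted and true indexability."""
    return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted))
```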
0:08:53 Now I present another experiment, in which the corpus is decomposed into two parts: on one hand the indexable documents, and on the other hand the non-indexable documents. 0:09:13 For example, in order to select only the well-indexable documents, we can fix a threshold, say at seventy percent, and each document is classified as indexable if its indexability is above this threshold. 0:09:32 We have a good classification when the true indexability and the predicted indexability are both above, or both below, the threshold; otherwise, the document is badly classified.
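The classification rule just described amounts to checking that the predicted and true indexability fall on the same side of the threshold; a minimal sketch:

```python
def classification_rate(predicted, actual, threshold=0.7):
    """Fraction of documents whose predicted and true indexability lie on
    the same side of the threshold (i.e. are classified the same way)."""
    good = sum((p >= threshold) == (a >= threshold)
               for p, a in zip(predicted, actual))
    return good / len(predicted)
```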
0:09:51 This figure shows the classification rate according to the indexability threshold: one curve for the confidence measure, one for the semantic compactness index, and in red the combination of the two measures used to predict the indexability. 0:10:14 As you can see, at low thresholds all the metrics classify the indexability correctly. 0:10:33 Beyond that, the classification rate decreases, especially at an eighty percent threshold, where the confidence measure alone yields only fifty-five percent of good classification. 0:10:50 With the same threshold, the combined measures correctly classify approximately two hundred more documents than the confidence measure alone. 0:11:03 And in all cases, the combination of the two metrics gives the better performance.
0:11:20 In conclusion, we have demonstrated the interest of semantic information combined with the confidence measure for spoken document retrieval. We use a combination of the two metrics, and this combination improves by about six percent the classification rate in terms of indexable and non-indexable documents. 0:11:47 In future work, we are planning to explore latent Dirichlet allocation for the semantic modeling, because it is based on the topic distribution over the words. 0:12:06 Thank you.
0:12:12 (Session chair) We have a few more minutes, so: questions?
0:12:20 (Audience question) I have one question while people are maybe thinking about theirs. Let's say you ran each of your queries: quite often the result of a query turns out to be unchanged, right? Roughly what percentage of the queries give the same results as with the manual transcription? 0:12:47 Just to make it clear: you had the manual transcription to create the reference, and you are looking at the recognizer's output; so for roughly what percentage of your queries do the results from the recognizer's output differ from those on the manual transcription?
0:13:19 (Audience discussion) Okay, so basically, given the usual issues in spoken document retrieval, I guess the difference between the two is not much; 0:13:33 so the question is whether, for this task, you actually need to get exactly the same results as with the manual transcription.
0:14:08 (Answer) Normally, if a document has many errors and we want to correct them, part of the document can be corrected. But a document with a lot of errors will never appear in the top ranking for the queries of the search engine, so this kind of document can be detected by our method and removed from the database. 0:14:38 On the other hand, there are a lot of documents whose word error rate is very low and which do not need to be manually corrected. 0:14:51 Approximately, with a thirty-five percent word error rate, about ten percent of the documents of the corpus can be removed by this method, because they do not contain much information, or not very important information; and approximately fifteen percent need to be corrected in order to keep a good set of indexable documents.
0:15:32 (Session chair) Thanks. Since we are close to time, if there are no other questions, let's thank the speaker.