Well, thank you for that kind introduction. You're right that luggage has been an issue with me; but of course, when I don't have my luggage, that's an even bigger issue for me.

So I appreciate the introduction, and I thank the organizing committee for inviting me, and especially for naming this town Joensuu; I don't know, I've never had this happen before when going to give a presentation.

So let me start by asking for a show of hands. Who among us has participated in a forensic-style evaluation of speaker recognition technology? That's good, that's good; I'm going to try to get more hands up, with interest, by the end of my presentation. Who has processed real forensic case data? Well, that's pretty good, okay; so I'll be preaching to the choir for some of you. And finally, who has actually testified in court? That's good. Very good, okay.

So let me talk about some of the interesting challenges in forensic and investigatory speaker recognition. The basic introductory material for my talk is, basically, to define the problem. In forensic and investigative speaker comparison, speech utterances are compared, and the process can be done either by humans or by machines.

In the forensic case, typically this is used in a court of law. This is very high stakes; it demands the best that science has to offer. Those of you who pay attention to trials on television are probably pretty nauseated by what you see out there and what is happening in the world in terms of the expert witnesses that I'll be talking about later and the methods they use. The methods vary quite widely, and there is a very nice survey paper by Gold and French that describes some of the variations in these processes; and that's not necessarily for the good.

It's important that the methods that are used be grounded in scientific principles and be applied properly. And just as important is to decide when you should not accept a case, when it would be irresponsible. So this idea of when to apply, or not apply, the methods is also very important. We're going to provide some analysis of the methods, and along the way some striking examples that I hope will get you excited about how challenging this domain really can be.

One of the things I want to do here, and in the broader sense at other conferences with wide diversity, is to improve communication between the research community — this great group here — and legal scholars. We have, for example, in speech, people like Bill Thompson, who wrote about the prosecutor's fallacy and was involved in the O. J. Simpson trial. So we've got a number of very high-profile legal scholars in the US involved, and internationally as well; and of course the legal systems are different throughout the world, so you have to address these questions within their contexts.

And then, finally, I'm going to ask this community for help and present some things that you could actually get involved in to help us make progress.

So I'll start by giving some background, cover some example approaches, talk about some of the activities that are currently going on, request some things for the community to get involved in, mention some future ideas, and conclude.

Okay. Forensics and investigation basically differ primarily by whether the results of the methods will be presented in a court of law. A lot of people doing investigation will try to use a similar process, one that has the rigor necessary should it be important to later present it in a court of law. But the forensic community and the investigative community work on similar problems in terms of trying to establish facts.

The actual presentation forum is where they differ. Now, here I have a cartoon that shows the most canonical example of a speaker comparison: we have a known speech sample and a questioned speech sample; you compare them, and there's some summary or analysis that the forensic examiner or analyst might write, a report. And we're not done; that's the simple view of the world.

Then I was happy, when I asked a number of friends for suggestions, that Michael Jessen from the BKA kindly provided this table from his summer school. It shows a little more granularity in terms of forensic versus investigative, including large-scale investigation, where you might actually be running automatic systems similar to IAFIS, the FBI's Integrated Automated Fingerprint Identification System, which conducts large-scale searches through databases. And you can see here that they vary in terms of whether the results will be presented in court, what kinds of methods are used, the number of comparisons, and the type and style of work on the data.

So let me now give just a couple of examples of some forensic situations. First, you might remember the 1996 Olympics and the Centennial Park bombing. There was a thirteen-second phone call that said: "There is a bomb in Centennial Park. You have thirty minutes." That's it.

So now you've got this thirteen-second call, and the people at 911 are frantically trying to figure out the address — where is Centennial Park? — so that they can dispatch officers to the scene. Basically, a lot of time passes, and there's only a short time left to clear the park. By the time the officers get there, two people are murdered and a hundred and twenty people are injured. And now they have a suspect in custody who matches the description of someone that was seen at a payphone, and that person's name is Richard Jewell.

And there was quite a bit of pressure in trying to establish whether this person was the one on the call. It turns out the actual person who made the call escaped the scene and was not caught for seven years.

Another very high-profile and more recent case: Trayvon Martin. This one had all sorts of the wrong things happening all at once: extreme mismatches of every type imaginable, outrageous claims of justified shooting; and then, just to make it more interesting, the Orlando Sentinel newspaper decided to go hire some voice experts.

I don't know if they quite appreciated the conditions under which they were working. First of all, it's hardly speaker recognition when the person is crying out for help, right? I'll show you later some of the issues involved in that. This was a very turbulent time in the US, and there was a lot of controversy regarding the kind of data that was involved in this case and how inappropriate the whole situation was. We have to thank people, by the way, like George Doddington, who's here today, for keeping the system on the rails; he was one of the expert witnesses.

So how hard is forensic speaker recognition? Well, a first step in that direction — though not truly forensic speaker recognition — was the NIST HASR evaluation. And actually, before the NIST HASR, there was an evaluation by NFI-TNO that actually used real forensic case data; I'll talk about that in a moment.

In the HASR evaluation, unlike conventional NIST evaluations, where you have so many trials that it's not really practical for humans to process the data, there was a paring down to make the number of trials manageable by humans. The process for doing that was a two-stage selection: first you use an automatic system to find the most confusable pairs, and then you refine that by using humans to find the most confusable pairs among the automatic system's confusable pairs. So you have very difficult data to work with, and the benefit is that now you can have an evaluation with a mere fifteen trials that's manageable by humans.
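To make the two-stage paring concrete, here is a minimal Python sketch. The scoring functions `auto_score` and `human_confusability` are hypothetical stand-ins (HASR's actual selection pipeline isn't specified here, and the real evaluation also included same-speaker trials, which this sketch omits):

```python
# Minimal sketch of two-stage confusable-pair selection, assuming
# auto_score(a, b) (automatic system similarity) and
# human_confusability(a, b) (pooled listener judgment) are provided.
from itertools import combinations

def select_trials(samples, speaker_of, auto_score, human_confusability,
                  n_auto=200, n_final=15):
    # Stage 1: score all different-speaker pairs with the automatic system
    # and keep the pairs the machine finds most confusable.
    pairs = [(a, b) for a, b in combinations(samples, 2)
             if speaker_of[a] != speaker_of[b]]
    pairs.sort(key=lambda p: auto_score(*p), reverse=True)
    candidates = pairs[:n_auto]
    # Stage 2: re-rank the machine's candidates by human confusability
    # and keep a handful of trials that humans can feasibly judge.
    candidates.sort(key=lambda p: human_confusability(*p), reverse=True)
    return candidates[:n_final]
```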

And this was the beginning of the NIST style of evaluations in this direction.

I don't know if you've heard these, but let me just play one here. Here is trial eleven; I'll now play the two samples, and the question that is asked is: are these from the same source? Here's the first one. Here's the second.

So, it's pretty impressive to me that that's supposed to be, as you can see by the truth label here, two people. Like I said in Brno, I would love to actually meet these two people, to see that they are two separate people, and have dinner with them; maybe it would be a high price for the meal. But those two people confused the humans and the automatic systems consistently in the first HASR evaluation.

And it inspired a lot of people to look into this interesting problem. Unlike the traditional NIST SRE protocol, HASR of course allows human listening; so this is exciting.

At the time, all the data was in English, so that might somewhat limit some of the human approaches; but it sure gave a nice flavor of the challenge, and this is difficult. But you know what? It's not nearly as difficult as the real thing, and I'll play that in a moment.

So, some challenges in speaker recognition for humans and machines; I have a few slides. The NIST evals have made progress on things like channel mismatch and distance to the microphone — by progress I mean progress in evaluating these effects — and also in terms of duration and cross-language, although that's not shown in the notes here. So this is good, but there's a lot more going on in a lot of forensic case data.

Typically, in these scenarios, the talkers are unfamiliar to the examiner. The talkers tend to be familiar with each other, and that affects their conversational style. There can be multiple talkers. There are all sorts of different styles: conversational, read-aloud, crying speech — if you want to call it speech — and then accommodation, when familiar talkers adapt to each other. If there's a conversation that's part of the evidence, which is often the case, the talkers might be deceptive; I have examples of this. And sometimes you're dealing with people who are mentally ill or medicated, and there can be all these situational mismatches to deal with. It goes on and on.

But you know what? It's actually the combinations that hurt. If you have an evaluation where you evaluated a few of these factors separately, the problem is that in real data these factors are combined in horrible ways that make it even more challenging when you're trying to determine the performance of a system, or a human, or a human with a system. So you can have mismatch galore, between the samples being compared and also against all the information used to train our automatic systems — the background data, the hyperparameters; it goes on and on.

Then you have additional challenges in terms of how this information should be presented, as scores or as decisions. We would be pretty strong advocates, in general, of, say, reporting log-likelihood ratios or something like that; but a lot of the forensic people I work with, the investigators, don't want to hear a log-likelihood ratio: they want to know whether they should go take action. This gets mathematically ugly in a number of ways, because of asserting prior probabilities to make decisions. This is a very hard and tenuous situation, and an area where this community has made some progress; I'm hoping that through Odyssey we'll actually see more in this direction.
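To illustrate why asserting priors is the tenuous part: turning a log-likelihood ratio into an action requires a prior and costs that lie outside the system's control. A minimal sketch, with all numbers hypothetical:

```python
import math

def posterior_odds(llr, prior_same):
    # Bayes' rule in odds form: posterior odds = likelihood ratio * prior odds.
    return math.exp(llr) * prior_same / (1.0 - prior_same)

def should_act(llr, prior_same, cost_false_alarm, cost_miss):
    # Minimum-expected-cost decision: act iff posterior odds exceed the
    # cost ratio. The LLR is the system's; the prior and costs are not.
    return posterior_odds(llr, prior_same) > cost_false_alarm / cost_miss

# The same evidence (LLR = 2.3, roughly 10-to-1) flips the decision
# depending on the asserted prior:
print(should_act(2.3, prior_same=0.5, cost_false_alarm=1, cost_miss=1))   # True
print(should_act(2.3, prior_same=0.01, cost_false_alarm=1, cost_miss=1))  # False
```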

Then you have this whole issue of calibration, with system scores moving around and drifting, if you will; this causes chaos among the analysts. One of the biggest challenges in a lot of this is building trust and confidence with the analysts or examiners: if your system starts misbehaving, they might stop using it, or do something kind of crazy. So there are a lot of issues around establishing trust and having the system be reliable, stable, and calibrated.

Then you have the issue of the court's question. We talked about this canonical example: I've got two speech samples; is the source the same? Well, that's not necessarily the question that the court hands you. "It's not Mike; but the other guy has just been murdered, and we don't have any recordings of his voice; so now what do you do?" There's a whole bunch of challenges in figuring out how you deal with questions from the courts that are not known in advance. One of the things I've been pursuing with some colleagues is to see whether those questions are somewhat negotiable, and whether we can get a pretty good menu of the history of these kinds of questions, to help us as developers build systems and acquire data to address the kinds of questions that are likely to come up.

Then you have this issue with the automatic systems: people might think they're fully automatic, but often what happens is that there are models that have been built where a human has segmented the speech and decided which speech utterances are assembled to create the models. So you've got this kind of chicken-and-egg problem: I'm trying to recognize speakers, but when I'm training my models I need to do some segmentation. So there's that factor to keep in mind as well.

Then there's a whole bunch of other things going on here. I've already talked about "when to punt," in terms of not accepting a case; "when to punt" is an expression from American football, and I'm not sure it translates internationally. And then there are some other issues about noise and degradation that are important to keep in mind; we'll talk more about those in a moment.

So now let's actually hear some real case data; this is pretty fascinating, I think. I'm going to play some examples. The first one, I'll set it up for you: a triple homicide has just been committed. The suspect runs from the scene with one of the victims' cell phones and their Bluetooth headset, and he's calling his friend to come and pick him up. He's running as fast as he can, the wind is blowing; it's a very difficult situation. So let me play this.

So that has a lot of characteristics beyond what you're probably used to working with in, say, the NIST evaluations. This is really challenging stuff, and it gets better: now we have the suspect in custody, and he's in jail, and he's kind of converted to being like Justin Bieber. So listen to this. That's pretty extreme mismatch, wouldn't you say? I don't know what you would do with data like that.

So that's just one example of incredible mismatch, and not only between the samples themselves. Well, maybe the last one isn't terribly unlike a lot of the training data that our systems are built with; but I'd be surprised if our systems had been trained, and had their hyperparameters and background models made knowledgeable, of data like that first sample. So this is extreme mismatch, not only between the samples but against our systems.

Let me play another example, of a very complex situation, where you have some pretty stressed, overlapping talkers. How many talkers are there in that situation? It sounded like about three to me, but I'm not sure. And in a part I didn't play, at the beginning, you've got the operator answering 911; then you hear the person whispering and then putting the phone into their pocket — where they found it later, unfortunately; he was by then the victim.

So this is the type of situation, and it gets into questions like: what question am I trying to answer? How many people were present? Who said what? That's the area of "disputed utterances," as it is known in the forensic community. These guys, of course, get rounded up, and they're all claiming, "No, it's the other guy that shot him; I was just visiting," and so on. So there are challenges like that you're faced with.

Another example is a very interesting threat call, and this one has some timeliness about it as well. So listen to this first recording. The audio system in here is pretty good; I don't know if you could make that out, but the guy is basically giving the address of a place that's going to be attacked by gunmen tomorrow.

Wow; you'd better decide what you're going to do. So they decide to bring in a suspect, and here's his interview. So there are a number of things going on. In that first call, it seems like the person was, as in the movies, holding a handkerchief over the phone; it sounded like they had marbles in their mouth. In the second one, I don't know if they were medicated or what was going on there.

But there is a lot of mismatch going on in that situation; and for investigative purposes, even though you're not in a court of law, it still has high stakes when you decide to take somebody into custody — that's a dramatic experience, right? So you still need to be cautious about how to proceed, yet it's very difficult to make a quick decision in situations like this. And this is just a small part of it.

As Reva Schwartz at the US Secret Service says, it's always something, every case. There was a case where somebody had a sex-change operation between the first sample and the second sample being compared. A lot of our systems are gender dependent; what do you do? There are just so many challenging situations

that come up when you're dealing with real forensic case data. And I should add: when samples get elevated to the level of a national resource like Reva Schwartz, those are the hardest of the forensic cases; the easier ones can be handled at a lower level.

So these are very challenging situations. One might ask: how do I figure out whether I should process this data, and whether it can be admitted in court? If I'm in the United States, I have an admissibility standard to deal with: Daubert.

So, for example, in US federal court, and in about half of the US state courts, the judge will consider the admissibility of scientific evidence. But judges are often the first to admit that, generally, they're not scientists; so they have this gatekeeper role pushed onto them.

And the idea, under Federal Rule of Evidence 702 on testimony by expert witnesses, is that the purpose is to assist the trier of fact, the judge or the jurors; if the evidence is going to be very confusing, then it's not admitted. So this is kind of loose.

Here, the courts in the US have tried to structure this, forming the so-called Daubert test; this is from Daubert v. Merrell Dow Pharmaceuticals. Basically, four or five different factors — depending on how you read it — are introduced in this Daubert test. First: has the method been tested, or can it be tested?

Well, one of the nice things about our community is that we do test a lot; I'm just not sure that we test on this kind of data. Another factor: has it been subjected to peer review and publication? Well, our community is very good at publishing papers, and this Odyssey is just one of those excellent forums. Now we're in trouble:

does it have a known error rate? Wow. Well, if you tell me what error rate you want, I can find the corpus that will probably give you that error rate; but that's not the answer they want to hear, right? They want something pretty solid, much more certain — like, for example, DNA, which by the way also has variability, but that's a whole other story; at least it's relatively small compared to what we experience in the voice world.
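To ground the "known error rate" point: for any one corpus and operating threshold, miss and false-alarm rates are easy to compute; the problem is that the numbers move with the corpus. A minimal sketch, with hypothetical score arrays:

```python
import numpy as np

def miss_fa(target_scores, nontarget_scores, threshold):
    # Miss and false-alarm rates at a single operating threshold.
    p_miss = float(np.mean(target_scores < threshold))
    p_fa = float(np.mean(nontarget_scores >= threshold))
    return p_miss, p_fa

def eer(target_scores, nontarget_scores):
    # Equal error rate: sweep the observed scores as thresholds and take
    # the point where miss and false-alarm rates are closest.
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    rates = [miss_fa(target_scores, nontarget_scores, t) for t in thresholds]
    p_miss, p_fa = min(rates, key=lambda r: abs(r[0] - r[1]))
    return (p_miss + p_fa) / 2.0
```

The same system scored on two different corpora will generally report two different EERs, which is exactly the variability this Daubert factor collides with.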

Are there existing standards controlling its use, and are they maintained? Well, currently there's very little in that area; but I'll be talking in a moment about some activities in that direction in the US, and about learning what's happening internationally, which is one reason I'm glad to be at this workshop.

And then there's this friendly-sounding one: is it generally accepted by the scientific community? Then you get into all these problems, like: what's a community? What's the scientific community? This particular part is also known as the Frye test, which predated the Daubert test.

So, looking at the basic anatomy of a speaker comparison system: you can form two parallel branches that start with feature extraction and creating models, then go through a comparison of the hypothesis that the samples match versus the hypothesis that they don't, and then produce a calibrated match score output. Now, that's fine; however, there are all these knowledge sources under the hood, and all these areas that are ripe for mismatch.

So, for example, let's just take an i-vector system. We have this signal processing chain, and the different stages shown here are where we need all these different kinds of background information: the hyperparameter tuning, the universal background model, the total variability matrix, and the covariance matrices that are needed to make these systems successful.
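As an illustration of how much trained background knowledge sits under the hood, here is a compact numpy sketch of classical i-vector extraction using the standard posterior-mean formula. The UBM parameters and total variability matrix T in the signature are exactly the knowledge sources that must come from background data before any case can be scored; the interface and shapes are mine, not any particular system's:

```python
import numpy as np

def extract_ivector(X, ubm_means, ubm_covs, ubm_weights, T):
    """Posterior-mean i-vector for one utterance.

    X: (N, D) frames; ubm_means, ubm_covs: (C, D) (diagonal covariances);
    ubm_weights: (C,); T: total variability matrix, reshaped to (C, D, R).
    Everything except X comes from background training data.
    """
    C, D = ubm_means.shape
    R = T.shape[-1]
    Tc = T.reshape(C, D, R)
    # Frame posteriors under the diagonal-covariance UBM.
    log_g = -0.5 * (((X[:, None, :] - ubm_means) ** 2) / ubm_covs
                    + np.log(2 * np.pi * ubm_covs)).sum(axis=2)
    log_p = np.log(ubm_weights) + log_g
    log_p -= log_p.max(axis=1, keepdims=True)
    post = np.exp(log_p)
    post /= post.sum(axis=1, keepdims=True)            # (N, C)
    # Zero-order and centered first-order Baum-Welch statistics.
    n = post.sum(axis=0)                               # (C,)
    f = post.T @ X - n[:, None] * ubm_means            # (C, D)
    # Posterior mean: (I + sum_c n_c T_c' S_c^-1 T_c)^-1 sum_c T_c' S_c^-1 f_c
    L = np.eye(R)
    b = np.zeros(R)
    for c in range(C):
        Tw = Tc[c] / ubm_covs[c][:, None]              # Sigma_c^{-1} T_c
        L += n[c] * (Tc[c].T @ Tw)
        b += Tw.T @ f[c]
    return np.linalg.solve(L, b)
```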

But there's more: what about calibration? I need to train that stage as well, and a system that's not calibrated will drive an analyst absolutely crazy; you lose their confidence, and they'll stop using your system. So this is a very important stage, and it's great that Niko has a paper here on calibration and ways to address this; it's one of Niko's favorite topics, and one of mine too.

So basically, you want to try to minimize all these nuisances. Some of them you can get a good handle on if you're processing a single pair of samples at a time; other nuisances only appear beyond single-pair comparisons. Those have to do with logical consistency: two samples matching, and another pair of samples matching, but some other pair of those samples not matching — and when I say "matching," I don't mean it in the binary sense; I mean scoring high.

So, calibration is a good thing; it makes Niko happy, and we all smile when it works. Thank you to everybody who works on it.
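A minimal sketch of one standard recipe for affine score calibration — logistic regression mapping raw scores to log-likelihood ratios. The development scores here are synthetic, and this is not necessarily the method in Niko's paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic development scores with known same/different-speaker labels.
tar = rng.normal(2.0, 1.0, 500)      # same-speaker (target) trial scores
non = rng.normal(-1.0, 1.2, 5000)    # different-speaker (non-target) scores

s = np.concatenate([tar, non]).reshape(-1, 1)
y = np.concatenate([np.ones(len(tar)), np.zeros(len(non))])

# Fit posterior log-odds = a*s + b, then subtract the training prior
# log-odds so the output is a log-likelihood ratio, not a posterior.
clf = LogisticRegression(C=1e6).fit(s, y)     # weak regularization
a, b = clf.coef_[0, 0], clf.intercept_[0]
prior_logodds = np.log(len(tar) / len(non))

def calibrated_llr(raw_score):
    return a * raw_score + b - prior_logodds
```

The learned a and b shift and scale the scores so the output behaves like an LLR on data matching the development conditions; when conditions drift, this is the stage that misbehaves first.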

So now, what do you do if you want to combine these methods? This also gets quite complicated. Do we weight these processes in a dynamic fashion, taking into account when they are working in the areas they've been developed in and trained on, and de-weighting them when they're running a little bit outside the regions they've been developed for?
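A common static baseline for such combination is linear logistic-regression fusion — the same machinery as calibration, applied to a vector of per-system scores; the dynamic, region-aware weighting just described would go beyond this. A sketch with synthetic development scores:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_tar, n_non = 300, 3000
# Two hypothetical component systems with different reliabilities.
S_tar = np.column_stack([rng.normal(2.0, 1.0, n_tar), rng.normal(1.5, 1.5, n_tar)])
S_non = np.column_stack([rng.normal(-1.0, 1.0, n_non), rng.normal(-0.5, 1.5, n_non)])
S = np.vstack([S_tar, S_non])
y = np.concatenate([np.ones(n_tar), np.zeros(n_non)])

fuser = LogisticRegression(C=1e6).fit(S, y)    # weak regularization
w, b = fuser.coef_[0], fuser.intercept_[0]
prior_logodds = np.log(n_tar / n_non)          # remove the training prior

def fused_llr(scores):
    # scores: one raw score per component system for a single trial.
    return float(scores @ w + b - prior_logodds)
```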

How do we mitigate observation bias? You certainly don't want the human examiner to know the scores from the automatic system before they finish their evaluation. But it gets even more fine-grained than that: sometimes you hear content in the samples you're working on that can bias you, and you might consider removing that content at the expense of working with less data.

You've got all these variabilities to deal with: the subjects of the samples themselves, the humans actually conducting the comparison process (not all analysts are alike, for example), and the machines as well. There are issues of consistency and repeatability; I've already mentioned logically consistent decisions; and then having some best practices to establish how to use these processes. Remember, one of the Daubert criteria is the existence of standards, and their maintenance, for invoking these processes.

So, toward this, there are a number of evaluations that can help us. NFI-TNO, I think in 2003, had the very first one on real forensic data; that was a lot of fun. The agreement required that you destroy the data after you were done, so unfortunately, since we abided by the agreement, we no longer have that data; but it was really very nice. The good news is that there might be more of that coming.

Then we have the NIST HASR series, which isn't quite forensic, but it's probing some dimensions that will help us make progress, I think, in the forensic domain. And the next SRE might actually have real forensic samples.

So, I think it's important to look at all of this in the context of the Daubert factors, especially for application in the United States, but maybe throughout the rest of the world as well; they seem like pretty sound principles to me. But if there are additional factors used internationally, I would love to know about them, to make sure they're addressed, at least in our work, as well.

So, some activities. In the US, there's SWG-Speaker, the Scientific Working Group on Speaker Recognition; we have a history of starting this, and a lot of the efforts were motivated by the 2009 report from the National Research Council of the National Academy of Sciences, Strengthening Forensic Science in the United States. It basically called all of forensic science on the carpet

and said: the practice that's used for DNA is the gold standard; the rest of you should model it. It called into question things like carpet fiber analysis and tool marks — things that scientifically didn't quite have the background in terms of their development; and that's partly because forensic science didn't grow up being developed by scientists.

So one area that we worked really hard to address with the investigatory voice working group is making progress on things like the different use cases and collection standards; best practices, as already mentioned (including "when to punt"); standard operating procedures; and this new Type-11 standard.

The scientific working group has a number of ad hoc committees, including an RDT&E committee, which a number of you would probably be interested in; best practices; science and the law; and vocabulary, to get the whole community talking together.

The best practices committee, for example, deals with a number of areas, including the collection of audio recordings and the related data that goes with an audio recording: maybe the phone numbers, the handsets used, a number of things like that. Some of those factors should be passed to the examiner; others might cause bias you have to be concerned about.

Then there's the transmission part of the standard, known as the Type-11 record; you'll probably be hearing a lot about that. And then there's the proper application, and also guidelines for examiners and reporting.

So here, for example, is how you form a standard transaction in this Type-11 framework. Basically, you create a transaction that has the known and questioned recordings; you've got the two Type-11 records that go with them, covering how to transmit that data; you have Type-2 information about the situation of each of those recordings; then you have a Type-2 that has all the information about the legal framework and justification; and then an overall Type-1 to enact the transaction. You go through this process where you do speaker recognition, scoring, and reporting, and then deliver the report back to the submitter.
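As a rough illustration of the transaction layout just described — the classes and field names below are invented for readability, since the actual ANSI/NIST-ITL record formats are defined by the standard itself:

```python
from dataclasses import dataclass, field

@dataclass
class Type11Record:            # audio transmission record for one recording
    role: str                  # "known" or "questioned"
    audio_path: str
    codec: str

@dataclass
class Type2Record:             # descriptive / case-context information
    description: str           # e.g., recording situation, legal framework

@dataclass
class Transaction:
    type1: str                 # header enacting the transaction type
    audio: list[Type11Record] = field(default_factory=list)
    context: list[Type2Record] = field(default_factory=list)

# One known-vs-questioned comparison request, per the talk's description.
txn = Transaction(
    type1="speaker-comparison request",
    audio=[Type11Record("known", "known.wav", "pcm"),
           Type11Record("questioned", "questioned.wav", "pcm")],
    context=[Type2Record("situation of known recording"),
             Type2Record("situation of questioned recording"),
             Type2Record("legal framework and justification")],
)
```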

This is just one of seventeen types of transactions currently defined in this effort; I don't have time to go over all of them.

How does one actually arrive at a best practice? You can go through two branches: survey the community to see what candidate best practices exist, or, on the other branch, look for gaps and develop new best practices. In all cases, these are going to go through a validation process that requires evaluation. Finally, once they've been evaluated, they'll be proposed and accepted as an actual best practice, and maybe a step further, as a proposed standard. This is all within the ANSI/NIST-ITL framework.

Sometimes you need multiple best practices, especially in human-based approaches, because there's a lot of variability among analysts and their different talents. If we had one standard that said human recognition should be done by structured listening, you would exclude eighty-five to ninety-five percent of the laboratories in the United States. And whenever you do an evaluation, you need to be very careful about the design and collection of the data. Finally, how do you keep all this going? There are some new efforts, which I'll talk about in a moment, with OSAC.

Let me put it as a simple request to the community: if you have candidates for best practices, please submit them to SWG-Speaker and OSAC for consideration. Pursue the Daubert factors and improve robustness. Work with the analysts: there's nothing quite as eye-opening as working with an analyst and understanding the challenges they're dealing with. And participate in forensic-style evaluations; that's what we would really like to see the most.

So here I just have a couple of closing slides. The idea here: I mentioned OSAC, the Organization of Scientific Area Committees; this is a new effort, and it's housed at NIST. SWG-Speaker will be absorbed into OSAC as its speaker recognition subcommittee. I've already mentioned the ANSI/NIST-ITL Type-11 records. There's a great set of documents, a journal, and even a code of conduct that you might be very interested in.

There are a lot of other organizations; I basically had a list covering a quarter of this slide, then I asked some friends for help — thank you, everybody who sent me things — and now I have too many things to actually talk about all of them, so I'll highlight two here. In fact, I mentioned that the NFI folks are pursuing some new data in the forensic domain; I won't steal the thunder from their paper, which is at this conference.

And there are some big efforts in Europe, in FP7 as well, toward integrated voice systems that are multimedia, multi-source systems.

Okay, so let me conclude. Speaker recognition is successfully used today in a variety of applications, but it must be applied responsibly, with caution; and this is referencing the paper the chair kindly mentioned at the beginning. We need to work more to address the factors in the forensic domain that degrade performance. Real case data, as you heard, can be extremely challenging.

Right now, if somebody wanted to ask, "Okay, that first example with the triple homicide: what kind of error rate could I expect in that situation?" — and that is one of the Daubert factors — nobody can answer that, even close. There are many challenges to work through in order to address these questions.

Please contact me if you have any ideas. Maybe we can talk more about this during the rest of the workshop. Thank you so much.

Well, we thank Joe very much. We ran a little bit longer, but we do have five or ten minutes for questions. So, yes, who wants to begin? Wait, the microphone is coming; it's for the recording.

About the mismatch, especially the first call you played: my question is about the intelligibility of the speech. If even a human cannot understand, for example, that first sample — you can't make out what they say — how can the machine deal with it? So the intelligibility of the speech is one part of it. For especially low-quality speech, could one say: okay, for this bit of speech, no one can act as an expert, so we exclude it from the beginning, or something like that? Has this issue been addressed before?

So, the intelligibility issue is an interesting one, because it came up in one of the very first courtroom test cases, with the Michigan State Police, with some voice evidence. The testimony from one of the police was that the voice on that recording could only be this person, to the exclusion of all others; and then the judge played the recording, and he couldn't understand it. So then he asked, "So, what makes you think that?", and quickly this was overturned, or ruled out.

Then, stepping forward: as you saw with structured listening, the first step there is to transcribe the speech into words and then look for the variation; you're in trouble if you can't transcribe the speech in the first place.

Now, one thing we need to be cautious about with the automatic systems: as long as they can detect speech — which isn't always the case — they'll process the data and produce a score. Well, you shouldn't treat it like a black box; that score might be meaningless. So I don't really know how to directly address your question other than to share those observations; but if you're working on that, it would be good to know.

Okay. Thank you. What else? Over there.

Thanks for the talk. I also work on speech, in Lyon, in France. I attended the forensic tutorial, and they said that when they have a trace recording and the court has a suspect, they ask the suspect to repeat it, so that it covers the same phonetic pronunciations as the actual trace. Can you... Sorry, can I clarify? I was listening to your presentation about the phonetic content, where you're actually looking at the relevant phones. Is that something you follow, a similar type of thing, where you get the suspect to pronounce the same set of phones?

So, this gets down to, in one area, the methods being used. The very old, antiquated method known as spectrographic matching actually requires at least twenty word-like units being spoken that match what's in the evidence. So one way they would deal with this is to give the person something to read that includes those twenty word-like units. Well, as you can imagine, read speech is disastrous if you're trying to study things like dialectal variation. So what's good for the old spectrographic matching process is a disaster for modern methods like structured listening — which, I should add, is inspired by a lot of the methods used in Europe, in Germany, by the BKA. So the recordings they could be talking about were collected in the old-style manner.

Just as a subsequent question, then: if we're able to get some kind of speech recognition into speaker ID systems, where there is some kind of phonetic alignment, is that not beneficial to the forensic community?

Well, in fact, some speaker recognition approaches have a layer where they're actually doing speech recognition and phone recognition; a lot of that work was inspired by George Doddington, actually, and idiolect. And sure, whether it's in the recognition system itself or a by-product of the structured listening approach, speech recognition becomes a very important process; whether it's automatic is a different question. If there's a lot of data to analyze, you'll overwhelm the analysts if they have to manually do, say, phonetic transcription, which was the approach used for quite a while. The system I showed on that one slide helps to automate that and speeds the efficiency, in fact. Next question.

You mentioned DNA as the sort of benchmark, and of course that's scary for us, too: we're never going to be as accurate as they are, and I think that's a problem in speaker recognition. But we do have valuable evidence to introduce; it's softer, it's weaker evidence. Do you think the American legal system can understand the concept of weaker evidence and how valuable it can be? And do you think a likelihood ratio can be understood by a jury?

Okay, so that's multiple questions. The first one: it is what the National Academy of Sciences was calling for, a framework like DNA's. They weren't demanding that the performance be on par with DNA, although that would be nice; but they liked the scientific background behind it and the very large studies that have been done; as evidence, it's a very nice model. Except, by the way, when you're dealing with DNA mixtures; but for the time being, just assume single-source DNA samples, because mixtures are a whole other story with their own challenges. So DNA is not perfect, but it's extremely good.

The next question, about whether jurors will be able to properly understand likelihood ratios: Bill Thompson is conducting a survey using mock juries to actually see, when they're presented with evidence in different forms — whether likelihood ratios or a verbal description of what a log-likelihood ratio might mean — how that's interpreted by jurors. I don't know if he's published that paper yet, but it should be happening soon.

And one thing that happened with Dorothy, who is also involved in this study: she came up with a very scary statistic, which was something like a quarter of jurors in the US don't understand fractions. What are we going to do — move to Europe? I don't know what the ratio is in Europe, but wow. So it's important that the general public be educated in this area.

If I could comment on this last question: I'm not sure it's useful to ask it; in fact, I think I have the answer. People will not understand the likelihood ratio, and we know all about that, because even we are hardly able to understand a likelihood ratio and how it's computed. But we should still be requesting it for forensic systems in all countries; we explain to people what it means, and we still report the results. So, to me, the issue is not the likelihood ratio itself.

The issue, to me, is to give the court a view of the quality of what we did, of the science in the report. The likelihood ratio is defined for a specific case; if one expert's report uses a global likelihood ratio, another expert could review it and arrive at something different. And when we report in some technical language, not in the court's language, the expert is just giving his own opinion and taking his own risk; and this is not like calibration at all.

Sorry, I don't want to cut that short, but I would like to leave time; we can discuss this question later, maybe at the coffee break. Last question.

Go ahead, George.

Well, likelihood ratios are a wonderful thing. The primary issue is that the likelihood ratio happens to be the output of a system — and whose likelihood ratio is it? If you actually know the likelihood ratio, it's perfectly wonderful to use; but the likelihood ratio is hardly ever what it's supposed to be. That's how it works.

Maybe what you were just getting at is that we need to keep in mind that we're always estimating likelihood ratios, and that's just another area, another cost, of mismatch: our systems are producing these estimates using data that probably doesn't look anything like that first real case I played.

I have to close the session, unfortunately; I want to thank you all, and let's thank Joe once more.