Ecstasy vs. Horseriding

Which is more dangerous, taking ecstasy or riding a horse?

This is the question that got Professor David Nutt, a British psychiatrist, into a spot of political bother. Nutt is the Editor of the academic Journal of Psychopharmacology. He recently published a brief and provocative editorial called "Equasy".

Equasy is a fun read with a serious message. (It's open access so you can read the whole thing - I recommend it.) Nutt points out that the way in which we think about the harms of illegal drugs, such as ecstasy, is unlike the way in which we think about other dangerous things such as horseriding - or "equasy" as he dubs it:

The drug debate takes place without reference to other causes of harm in society, which tends to give drugs a different, more worrying, status. In this article, I share experience of another harmful addiction I have called equasy...
He goes on to describe some of the injuries, including brain damage, that you can get from falling off horses. After arguing that horseriding is in some ways comparable to ecstasy in terms of its dangerousness he concludes:
Perhaps this illustrates the need to offer a new approach to considering what underlies society’s tolerance of potentially harmful activities and how this evolves over time (e.g. fox hunting, cigarette smoking). A debate on the wider issues of how harms are tolerated by society and policy makers can only help to generate a broad based and therefore more relevant harm assessment process that could cut through the current ill-informed debate about the drug harms? The use of rational evidence for the assessment of the harms of drugs will be one step forward to the development of a credible drugs strategy.
Or, in other words, we need to ask why we are more concerned about the harms of illicit drugs than we are about the harms of, say, sports. No-one ever suggests that the existence of sporting injuries means that we ought to ban sports. Ecstasy is certainly not completely safe. People do die from taking it, and it may cause other, more subtle harms. But people also die and get hurt falling off horses. Even if it turns out that, on an hour-by-hour basis, you're more likely to die riding a horse than dancing on ecstasy (quite possible), no-one would think to ban riding and legalize E. But why not?
This attitude raises the critical question of why society tolerates –indeed encourages – certain forms of potentially harmful behaviour but not others, such as drug use.
Which is an extremely good question. It remains a good question even if it turns out that horse-riding is much safer than ecstasy. These are just the two examples that Nutt happened to pick, presumably because they allowed him to make that cheeky pun. Comparing the harms of such different activities is fraught with pitfalls anyway - are we talking about the harms of pure MDMA, or of street ecstasy? Do we include people injured by horses indirectly (e.g. in road accidents)?

Yet the whole point is that no-one even tries to do this. The dangerousness of drugs is treated as quite different from the dangerousness of sports and other such activities. The media indeed seem to have a particular interest in the harms of ecstasy - at least according to a paper cited by Nutt, Forsyth (2001), which claims that deaths from ecstasy in Scotland were much more likely to get newspaper coverage than deaths from paracetamol, Valium, or even other illegal drugs. It's not clear why this is. Indeed, when you make the point explicitly, as Nutt did, it looks rather silly. Why shouldn't we treat taking ecstasy as a recreational activity like horse-riding? That's something to think about.

Professor Nutt is well known in psychopharmacology circles both for his scientific contributions and for his outspoken views. These cover drug policy as well as other aspects of psychiatry - for one thing, he's strongly pro-antidepressants (see another provocative editorial of his here).

As the recently appointed Chairman of the Advisory Council on the Misuse of Drugs - "an independent expert body that advises government on drug related issues in the UK" - Nutt might be thought to have some degree of influence. (He wrote the article before he became Chairman.) Sadly not, it appears, for as soon as the Government realized what he'd written, he got a dressing-down from British Home Secretary Jacqui Smith - Ooo-er:
For me that makes light of a serious problem, trivialises the dangers of drugs, shows insensitivity to the families of victims of ecstasy and sends the wrong message to young people about the dangers of drugs.
I'm not sure how many "young people" or parents of ecstasy victims read the Journal of Psychopharmacology, but I can't see how anyone could be offended by the Equasy article. Except perhaps people who enjoy hunting foxes while riding horses (Nutt compares this to drug-fuelled violence). Nutt's editorial was intended to point out that discussion over drugs is often irrational, and to call for a serious, evidence-based debate. It is not really about ecstasy, or horses, but about the way in which we conceptualize drugs and their harms. Clearly, that's just a step too far.


D. Nutt (2008). Equasy - An overlooked addiction with implications for the current debate on drug harms. Journal of Psychopharmacology, 23(1), 3-5. DOI: 10.1177/0269881108099672

"Voodoo Correlations" in fMRI - Whose voodoo?

It's the paper that needs little introduction - Ed Vul et al.'s "Voodoo Correlations in Social Neuroscience". If you haven't already heard about it, read the Neurocritic's summary here or the summary at the BPS Research Digest here. Ed Vul's personal page has some interesting further information here. (Probably the most extensive discussion so far, with a very comprehensive collection of links, is here.)

Few neuroscience papers have been discussed as widely, or as quickly, as this one. (Nature, New Scientist, Newsweek, and Scientific American have all covered it.) Sadly, both new and old media commentators seem to have been more willing to talk about the implications of the controversy than to explain exactly what is going on. This post is a modest attempt to, first and foremost, explain the issues, and then to evaluate some of the strengths and limitations of Vul et al.'s paper.

[Full disclosure: I'm an academic neuroscientist who uses fMRI, but I've never performed any of the kind of correlational analyses discussed below. I have no association with Vul et al., nor - to my knowledge - with any of the authors of any of the papers in the firing line.]

1. Vul et al.'s central argument. Note that this is not their only argument.

The essence of the main argument is quite simple: if you take a set of numbers, then pick out some of the highest ones, and then take the average of the numbers you picked, the average will tend to be high. This should be no surprise, because you specifically picked out the high numbers. However, if for some reason you forgot or overlooked the fact that you had picked out the high numbers, you might think that your high average was an interesting discovery. This would be an error. We can call it the "non-independence error", as Vul et al. do.
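If you want to see this in action, here's a toy simulation (mine, not Vul et al.'s) - a few lines of Python that pick out the high numbers from a batch of random ones and then "discover" that their average is high:

```python
import numpy as np

# Toy illustration of the non-independence error: draw 1,000 numbers whose
# true mean is zero, keep only the biggest ones, and note that the average
# of the ones we kept is well above zero.
rng = np.random.default_rng(0)
numbers = rng.normal(loc=0.0, scale=1.0, size=1000)

selected = numbers[numbers > 2.0]   # pick out only the high numbers

print("Mean of all the numbers:   %.2f" % numbers.mean())   # close to 0
print("Mean of the selected ones: %.2f" % selected.mean())  # roughly 2.3-2.4
```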

Vul et al. argue that roughly half of the published scientific papers in a certain field of neuroscience include results which fall prey to this error. The papers in question are those which attempt to correlate activity in certain parts of the brain (measured using fMRI) against behavioural or self-report measures of "social" traits - essentially, personality. Vul et al. call this "social neuroscience", but it's important to note that it's only a small part of that field.

Suppose, for example, that the magnitude of the neural activation in the amygdala caused by seeing a frightening picture was positively correlated with the personality trait of neuroticism - tending to be anxious and worried about things. The more of a worrier a person is, the bigger their amygdala response to the scary image. (I made this example up, but it's plausible.)

The correlation coefficient, r, is a measure of how strong the relationship is. A coefficient of 1.0 indicates a perfect linear correlation. A coefficient of 0.4 would mean that the link was a lot weaker, although still fairly strong. A coefficient of 0 indicates no correlation at all. This image from Wikipedia shows what linear correlations of different strengths "look like".
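For a concrete feel of what these numbers mean, here's a quick sketch (my own toy example, nothing from the papers under discussion): the same underlying relationship between two variables, with more and more noise added, gives progressively lower values of r.

```python
import numpy as np

# The noisier the relationship between x and y, the lower Pearson's r.
rng = np.random.default_rng(1)
x = rng.normal(size=500)

for noise in (0.0, 1.5, 100.0):
    y = x + noise * rng.normal(size=500)
    r = np.corrcoef(x, y)[0, 1]
    print(f"noise level {noise:6.1f}  ->  r = {r:.2f}")  # about 1.0, 0.55 and 0.0
```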

Vul's argument is that many of the correlation coefficients appearing in social neuroscience papers are higher than they ought to be, because they fall prey to the non-independence error discussed above. Many reported correlations were in the range of r=0.7-0.9, which they describe as being implausibly high.

They say that the problem arises when researchers search across the whole brain for any parts where the correlation between activity and some personality measure is statistically significant - that is to say, where it is high - and then work out the average correlation coefficient in only those parts. The reported correlation coefficient will tend to be a high number, because they specifically picked out the high numbers (since only high numbers are likely to be statistically significantly different from zero.)

Suppose that you divided the amygdala into 100 small parts (voxels) and separately worked out the linear correlation between activity and neuroticism for each voxel. Suppose that you then selected those voxels in which the correlation was greater than (say) 0.8, and worked out the average: (say) 0.86. This does not mean that activity across the amygdala as a whole is correlated with neuroticism with r=0.86. The "full" amygdala-neuroticism correlation must be less than this. (Clarification 5.2.09: Since there is random noise in any set of data, it is likely that some of the correlations which reached statistical significance were ones which were very high by chance. This does not mean that there weren't any genuinely correlated voxels. However, it means that the average of the correlated voxels is not a measure of the average of the genuinely correlated voxels. This is a case of regression to the mean.)
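Here's that thought experiment as a little simulation (the numbers, thresholds and random seed are all invented by me, not taken from any real study):

```python
import numpy as np

# 100 amygdala "voxels" whose activity genuinely correlates with neuroticism
# at r = 0.5, measured in 20 subjects. Selecting voxels on their *observed*
# correlation and then averaging those same correlations gives an inflated figure.
rng = np.random.default_rng(2)
n_subjects, n_voxels, true_r = 20, 100, 0.5

neuroticism = rng.normal(size=n_subjects)
noise = rng.normal(size=(n_voxels, n_subjects))
activity = true_r * neuroticism + np.sqrt(1 - true_r ** 2) * noise

observed_r = np.array([np.corrcoef(neuroticism, voxel)[0, 1] for voxel in activity])
selected = observed_r[observed_r > 0.7]   # keep only the impressive-looking voxels

print("True correlation in every voxel:    %.2f" % true_r)
print("Mean observed r, all voxels:        %.2f" % observed_r.mean())  # near 0.5
print("Mean r in the selected voxels only: %.2f" % selected.mean())    # well above 0.5
```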

Vul et al. say that out of 52 social neuroscience fMRI papers they considered, 28 (54%) fell prey to this problem. They determined this by writing to the authors of the papers and asking them to answer some multiple-choice questions about their statistical methodology. This chart shows the reported correlation coefficients in the papers which seemed to suffer from the problem (in red) vs. those which didn't (in green); unsurprisingly, the ones which do tended to give higher coefficients. (Each square is one paper.)
That's it. It's quite simple. But... there is a very important question remaining. We've said that non-independent analysis leads to "inflated" or "too high" correlations, but too high compared to what? Well, the "inflated" correlation value reported by a non-independent analysis is entirely accurate - in the sense that it's not just made up - but it only refers to a small and probably unrepresentative collection of voxels. It only becomes wrong if you think that this correlation is representative of the whole amygdala (say).

So you might decide that the "true" correlation is the mean correlation over all of the voxels in the amygdala. But that's only one option. There are others. It would be equally valid to take the average correlation over the whole amygdalo-hippocampal complex (a larger region). Or the whole temporal cortex. That would be silly, but not an error - so long as you make it clear what your correlation refers to, any correlation figure is valid. If you say "The voxel in the amygdala with the greatest correlation with neuroticism in this data-set had an r=0.99", that would be fine, because readers will realize that this r=0.99 figure was probably an outlier. However, if you say, or imply, that "The amygdala was correlated with neuroticism at r=0.99" based on the same data, you're making an error.

My diagram (if you can call it that...) to the left illustrates this point. The ovals represent the brain. The colour of each point in the brain represents the degree of linear correlation between some particular fMRI signal in that spot, and some measure of personality.

Oval 1 represents a brain in which no area is really correlated with personality. So most of the brain is gray, meaning very low correlation. But a few spots are moderately correlated just by chance, so they show up as yellow.

Oval 2 represents a brain in which a large blob of the brain (the "amygdala", let's call it) is really correlated quite well, i.e. yellow. However, some points within this blob are, just by chance, even more correlated, shown in red.

Now, if you took the average correlation over the whole of the "amygdala", it would be moderate (yellow) - i.e. picture 2a. However, suppose that instead, you picked out those parts of the brain where the correlation was so high that it could not have occurred by chance (statistically significant).

We've seen that yellow spots often occur by chance even without any real correlation, but red ones don't - it's just too unlikely. So you pick out the red spots. If you average those, the average is obviously going to be very high (red). i.e. picture 2b. But if you then noticed that all of the red spots were in the amygdala, and said that the correlation in the amygdala was extremely high, you'd be making (one form of) the non-independence error.

Some people have taken issue with Vul's argument, saying that it's perfectly valid to search for voxels significantly correlated with a behaviour, and then to report on the strength of that correlation. See for example this anonymous commentator:

many papers conducted a whole brain correlation of activation with some behavioral/personality measure. Then they simply reported the magnitude of the correlation or extracted the data for visualization in a scatterplot. That is clearly NOT a second inferential step, it is simply a descriptive step at that point to help visualize the correlation that was ALREADY determined to be significant.
The academic responses to Vul make the same point (but less snappily).

The truth is that while there is technically nothing wrong with doing this, it could easily be misleading in practice. Searching for voxels in the brain where activation is significantly correlated with something is perfectly valid, of course. But the magnitude of the correlation in these voxels will be high by definition. These voxels are not representative because they have been selected for high correlation. In particular, even if these voxels all happen to be located within, say, the amygdala, they are not representative of the average correlation in the amygdala.

A related question is whether this is a "one-step" or a "two-step" analysis. Some have objected that Vul implies it is a two-step analysis in which the second step is "wrong", whereas in fact it's just a one-step analysis. That's a purely semantic issue. There is only one statistical inference step (searching for significantly correlated voxels). But to then calculate and report the average correlation in those voxels is a second, descriptive step. The second step is not strictly wrong, but it could be misleading - not because it introduces a new, flawed analysis, but because it invites a misinterpretation of the results of the first step.

2. Vul et al.'s secondary argument

The argument set out above is not the only one in the Vul et al. paper. There's an entirely separate argument, introduced on page 18 (Section F).

The central argument is limited in scope. If valid, it means that some papers - those which used non-independent methods to compute correlations - reported inappropriately high correlation coefficients. But it does not even claim that the true correlation coefficients were zero, or that the correlated parts of the brain were in the wrong places. If one picks out those voxels in the brain which are significantly correlated with a certain measure, it may be wrong to then compute the average correlation, but the fact that the correlations are significantly greater than zero remains. Indeed, the whole argument rests upon the fact that they are!

But... this all assumes that the calculation of statistical significance was done correctly. Such calculations can get very complex when it comes to fMRI data; in particular, it can be difficult to correct for the multiple comparisons problem. Vul et al. point out that in some of the papers in question (they only cite one, but say that the same also applies to an unspecified number of others), the calculation of significance seems to have been done wrong. They trace the mistake to a table printed in a paper published in 1995 (Forman et al.). They accuse some people of having misunderstood this table, leading to completely wrong significance calculations.
The per-voxel false detection probabilities described by E. et al (and others) seem to come from Forman et al.’s Table 2C. Values in Forman et al’s table report the probability of false alarms that cluster within a single 2D slice (a single 128x128 voxel slice, smoothed with a FWHM of 0.6*voxel size). However, the statistics of clusters in 2D (a slice) are very different from those of a 3D volume: there are many more opportunity for spatially clustering false alarm voxels in the 3D case, as compared to the 2D case. Moreover, the smoothing parameter used in the papers in question was much larger than 0.6*voxel size assumed by Forman in Table 2C (in E. et al., this was >2*voxel size). The smoothing, too, increases the chances of false alarms appearing in larger spatial clusters.
If this is true, then it's a knock-down point. Any results based upon such a flawed significance calculation would be junk, plain and simple. You'd need to read the papers concerned in detail to judge whether it was, in fact, accurate. But this is a completely separate point to Vul et al.'s primary non-independence argument. The primary argument concerns a statistical phenomenon; this secondary argument accuses some people of simply failing to read a paper. The primary argument suggests that some reported correlation coefficients are too high, but only this second argument suggests that some correlation coefficients may in fact be zero. And Vul et al. do not say how many papers they think suffer from this serious flaw.
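The 2D-versus-3D cluster issue is specific to the papers concerned, but the general reason why multiple-comparisons correction matters in fMRI is easy to demonstrate with a toy example (again mine, not Vul et al.'s):

```python
import numpy as np
from scipy import stats

# Correlate a random "personality score" with 10,000 voxels of pure noise
# and count how many look significant at an uncorrected p < 0.05.
rng = np.random.default_rng(3)
n_subjects, n_voxels = 20, 10_000

score = rng.normal(size=n_subjects)
voxels = rng.normal(size=(n_voxels, n_subjects))   # no real effect anywhere

p_values = np.array([stats.pearsonr(score, v)[1] for v in voxels])
n_false = int((p_values < 0.05).sum())
print(f"'Significant' voxels out of {n_voxels}: {n_false}")   # roughly 500
# This is why fMRI analyses must correct for multiple comparisons - and why
# getting that correction wrong can make null results look like findings.
```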

These two arguments seem to have gotten mixed up in the minds of many people. Responses to the Vul et al. paper have seized upon the secondary accusation that some correlations are completely spurious. The word "voodoo" in the title can't have helped. But this misses the point of Vul et al.'s central argument, which is entirely separate, and seems almost indisputable so far as it goes.

3. Some Points to Note
  • Just to reiterate, there are two arguments about brain-behaviour correlations in Vul et al. The main one - the one everyone's excited about - purports to show that 54% of the reported correlations in social neuroscience are weaker than has been claimed, but it cannot be taken to mean that they are zero. The second one claims that some correlations are entirely spurious because they were based on a very serious error stemming from misreading a paper. But at present only one paper has been named as a victim of this error.
  • The non-independence error argument is easy to understand and isn't really about statistics at all. If you've read this far, you should understand it as well as I do. There are no "intricacies". (The secondary argument, about multiple-comparison testing in fMRI, is a lot trickier however.)
  • How much the non-independence error inflates correlation sizes is difficult to determine, and it will vary from case to case. Amongst other things, the degree of inflation will depend upon two factors: the strictness of the statistical threshold used to pick the voxels (a stricter threshold = higher correlations picked); and the number of voxels picked (if you pick 99% of the voxels in the amygdala, then that's nearly as good as averaging over the whole thing; if you pick the one best voxel, then you could inflate the correlation enormously). Note, however, that many of the papers that avoided the error still reported pretty strong correlations.
  • It's easy to work out brain activity-behaviour correlations while avoiding the non-independence problem. Half of the papers Vul et al. considered in fact did this (the "green" papers). One simply needs to select the voxels in which to calculate the average correlation based on some criterion other than the correlation itself. One could, for example, use an anatomy textbook to select those voxels making up the amygdala. Or, one could select those voxels which are strongly activated by seeing a scary picture (see the sketch after this list). Many of the "green" papers which did this still reported strong correlations (r=0.6 or above).
  • Vul et al.'s criticisms apply only to reports of linear correlations between regional fMRI activity and some behavioural or personality measure. Most fMRI studies do not try to do this. In fact, many do not include any behavioural or personality measures at all. At the moment, fMRI researchers are generally seeking to find areas of the brain which are activated during experience of a certain emotion, performance of a cognitive process, etc. Such papers escape entirely unscathed.
  • Conversely, although Vul et al. looked at papers from social neuroscience, any paper reporting on brain activity-behaviour linear correlations could suffer from the non-independence problem. The fact that the authors happened to have chosen to focus on social neuroscience is irrelevant.
  • Indeed, Vul & Kanwisher have also recently written an excellent book chapter discussing the non-independence problem in a more general sense. Read it and you'll understand the "voodoo" better.
  • Therefore, "social neuroscience" is not under attack (in this paper.) To anyone who's read & understood the paper, this will be quite obvious.
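As promised above, here is a sketch of what an "independent" analysis looks like (invented numbers again; the simulated "localizer" selection is just an illustration of the kind of criterion a "green" paper might use):

```python
import numpy as np

# Choose the voxels of interest using a separate criterion - here, a
# simulated localizer contrast standing in for "most activated by scary
# pictures" - and only then measure the activity-neuroticism correlation.
rng = np.random.default_rng(4)
n_subjects, n_voxels, true_r = 20, 100, 0.5

neuroticism = rng.normal(size=n_subjects)
activity = (true_r * neuroticism
            + np.sqrt(1 - true_r ** 2) * rng.normal(size=(n_voxels, n_subjects)))

localizer = rng.normal(size=n_voxels)   # independent of the observed correlations
roi = np.argsort(localizer)[-30:]       # the 30 "most activated" voxels

roi_r = np.array([np.corrcoef(neuroticism, activity[v])[0, 1] for v in roi])
print("True correlation:                       %.2f" % true_r)
print("Mean r in the independently chosen ROI: %.2f" % roi_r.mean())  # close to 0.5
```

Because the selection never looks at the correlation itself, the estimate is not inflated.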
4. Remarks: On the Art of Voodoo Criticism

Vul et al. is a sound warning about a technical problem that can arise with a certain class of fMRI analyses. The central point, although simple, is not obvious - no-one had noticed it before, after all - and we should be very grateful to have it pointed out. I can see no sound defense against the central argument: the correlations reported in the "red list" papers are probably misleadingly high, although we do not know by how much. (The only valid defense would be to say that your paper did not, in fact, use a non-independent analysis.)

Some have criticized Vul et al. for their combative or sensationalist tone. It's true that they could have written the paper very differently. They could have used a conservative academic style and called it "Activity-behaviour correlations in functional neuroimaging: a methodological note". But no-one would have read it. Calling their paper "Voodoo Correlations" was a very smart move - although there is no real justification for the word, it brilliantly served to attract attention. And attention is what papers like this deserve.

But this paper is not an attack on fMRI as a whole, or social neuroscience as a whole, or even the calculation of brain-behaviour correlations as a whole. Those who treat it as such are the real voodoo practitioners, in the old-fashioned sense: they see Vul sticking pins into a small part of neuroscience, and believe that this will do harm to the whole of it. This means you, Sharon Begley of Newsweek: "The upcoming paper, which rips apart an entire field: the use of brain imaging in social neuroscience...". This means you, anyone who read about this paper and thought "I knew it". No, you didn't; you may have thought that there was something wrong with all of these social neuroscience fMRI papers, but unless you are Ed Vul, you didn't know what it was.

There's certainly much wrong with contemporary cognitive neuroscience and fMRI. Conceptual, mathematical, and technical problems plague the field, some of which have been covered previously on Neuroskeptic and on other blogs, as well as in a few papers (although surprisingly few). In all honesty, a few inflated correlations rank low on the list of the problems with the field. Vul's is a fine paper. But its scope is limited. As always, be skeptical of the skeptics.

Edward Vul, Christine Harris, Piotr Winkielman, Harold Pashler (2008). Voodoo Correlations in Social Neuroscience. Perspectives on Psychological Science.

Lies, Libel and Love Detection

Via Mind Hacks, we learn about the case of Francisco Lacerda, a Stockholm University academic who's been threatened with legal action by the sinister-sounding Nemesysco company. Nemesysco sell software which, they claim, can detect deception and emotions by analyzing the sound of people's voices - lie detection, in other words. (In fact it turns out that it can also be used to detect love, or at least so they say - see below...)

The legal dispute concerns a 2007 paper authored by Lacerda and Anders Eriksson, entitled "Charlatanry in Forensic Speech Science: A Problem to be Taken Seriously". It was originally published in The International Journal of Speech, Language and the Law, but was taken down from the journal's website following Nemesysco's threats. However, the full text is still available on Scribd.

To be fair to Nemesysco, you can see why they took offence. The paper is unusually lively for an academic article. Here are some of the best bits:

Contrary to the claims of sophistication...the LVA [Nemesysco's "Layered Voice Analysis" system] is a very simple program written in Visual Basic. The entire program code, published in the patent documents, comprises no more than 500 lines of code... there is really nothing in the program that requires any mathematical insights beyond very basic secondary school mathematics... we initially intended to use the code published in the patent documents to make a running copy of the program, but the code is rather messy and not particularly well structured and we decided it would not be worth the time and effort to clean up the code in order to convert it into a running program.
In fact, in parts the thing reads more like a blog post or an op-ed than a scientific paper - no bad thing, of course. Even Lacerda admits that "The article had a journalistic tone and was rather provocatively written. We wanted to prove that the technology behind the lie detector is a scam." It's also not entirely clear why Nemesysco, who claim no specific scientific credentials, are a fit subject for an academic journal. (Other voice analysis companies, who misread scientific papers in support of their claims, seem a more obvious target.)

Still, Eriksson and Lacerda make an excellent case against Nemesysco. They point out that, according to the patent documents, Nemesysco's "LVA" system does nothing more than apply a simplistic analysis to the amplitude waveform of the speech, involving counting the number of "thorns" (sharp peaks or troughs) and "plateaus" (flat bits):

As they point out, the number of these things will depend upon, amongst other factors, the quality of the audio recording and the digitization process: a better sound recording with a higher sampling rate (more "dots" on the graph above) will inevitably have more thorns and plateaus:
The number of thorns and plateaus...depends crucially on the sampling rate, amplitude resolution, and the threshold values defined in the program
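To see why, here's a crude simulation (with my own toy definition of a "thorn" - a sample higher than both of its neighbours - which is not necessarily Nemesysco's): the same noisy waveform, digitized at two different rates, yields very different thorn counts.

```python
import numpy as np

rng = np.random.default_rng(5)

def count_thorns(signal):
    # A sample that is higher than both of its neighbours counts as a "thorn".
    return int(np.sum((signal[1:-1] > signal[:-2]) & (signal[1:-1] > signal[2:])))

t = np.linspace(0, 1, 44_100)                                   # high sampling rate
waveform = np.sin(2 * np.pi * 150 * t) + 0.2 * rng.normal(size=t.size)

low_rate = waveform[::8]                                        # same sound, 1/8 the samples

print("Thorns at the high sampling rate:", count_thorns(waveform))
print("Thorns at the low sampling rate: ", count_thorns(low_rate))
```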
Even setting aside these issues, the fundamental point is that there is absolutely no reason to think that the number of thorns and plateaus in the speech waveform has any relation to whether someone is lying, under emotional stress, or anything else. This makes the LVA system even less plausible than the older "Voice Stress Analysis" (VSA) method of vocal lie detection, which Eriksson and Lacerda also discuss. There is at least some theoretical basis in physiology for that system, although a very, very shaky one. LVA doesn't even have that - or at least none has been provided - so when Nemesysco claim that

The SENSE technology can detect the following emotional and cognitive states:

Excitement Level: Each of us becomes excited (or depressed) from time to time. SENSE compares the presence of the Micro-High-frequencies of each sample to the basic profile to measure the excitement level in each vocal segment.

Confusion Level: Is your subject sure about what he or she is saying? SENSE technology measures and compares the tiny delays in your subject's voice to assess how certain he or she is.

Stress Level: Stress is physiologically defined as the body's reaction to a threat, either by fighting the threat, or by fleeing. However, during a spoken conversation neither option may be available. The conflict caused by this dissonance affects the micro-low-frequencies in the voice during speech.

Thinking Level: How much is your subject trying to find answers? Might he or she be "inventing" stories?

S.O.S: (Say Or Stop) - Is your subject hesitating to tell you something?

Concentration Level: Extreme concentration might indicate deception.

Anticipation Level: Is your subject anticipating your responses according to what he or she is telling you?

Embarrassment Level: Is your subject feeling comfortable, or does he feel some level of embarrassment regarding what he or she is saying?

Arousal Level: What triggers arousal in the subject? Is he or she interested in you? Aroused by certain visuals? This new detection can be used both for personal use for issues of romance, or professionally for therapy relating to sex-offenders.

Deep Emotions: What long-standing emotions does your subject experience? Is he or she "excited" or "uncertain" in general?

SENSE's "Deep" Technology: Is your subject thinking about a single topic when speaking, or are there several layers (i.e., background issues, something that may be bothering him or her, planning, etc.) SENSE technology can detect brain activity operating at a pre-conscious level.

and yet nowhere on their website is there any hint of evidence for any of this, skepticism is justified. Amongst many other things, even if we each have a vocal pattern associated with, say, arousal (not implausible), it's unlikely that the same pattern would be present in the voices of men, women, and people of different ages. People just aren't that alike, as any psychologist or neuroscientist knows. Even direct measures of brain activity during very simple cognitive tasks vary greatly between individuals. The chance that any kind of analysis of the voice could reveal such complex information about an individual without their compliance is remote.

Almost certainly, Nemesysco's analysis provides no useful information about the speaker as such; as Eriksson and Lacerda suggest, it probably "works" through two psychological mechanisms. Firstly, if someone believes that their voice is being analyzed, they may tend to be more truthful because they think that lies will be detected. Secondly, the user of the voice analysis is able to interpret the output - e.g. "speaker stressed, concentrating hard" - in terms of what they already know about the speaker. Anyone might be stressed and concentrating hard during almost any conversation, so it always "fits".

Still, if you don't believe me, and you want to try out LVA for yourself, you can - and you don't have to be a cop or a spy. Nemesysco are now marketing their technology directly to consumers in the form of the Love Detector. The Love Detector is available as a Skype plug-in for just $29, and it allows you to know whether the object of your affections feels the same way about you, all from the sound of their voice.
Love Detector was originally designed with young singles in mind, or anyone searching for "the ONE". If you are currently looking for love, starting to date someone, or just have that unmistakable feeling, and you want to make sure it's mutual, Love Detector is the tool for you. If you are in a long-term relationship or even married, this version of Love Detector offers a "Relationship Selector" option designed to meet your needs as well.
There is even, apparently, a free online version. If the mood strikes, maybe I'll try it out. Watch this space. And lock up your daughters (or at least unplug their microphones...)


Anders Eriksson, Francisco Lacerda (2008). Charlatanry in forensic speech science: A problem to be taken seriously. International Journal of Speech, Language and the Law, 14(2). DOI: 10.1558/ijsll.2007.14.2.169

 