On Fri, 24 Feb 2012 10:35:15 +0000 (UTC), JohnF <email@example.com> wrote:
>Jennifer Murphy <JenMurphy@jm.invalid> wrote:
>> Rich Ulrich <firstname.lastname@example.org> wrote:
>>
>>> What are you trying to do?
>>
>> I am trying to calculate for each word the relative likeliness that it
>> would be encountered by an average well-educated person in their daily
>> activities: reading the paper, listening to the news, attending classes,
>> talking to other people, reading books, etc.
>>
>> The raw scores that I have already do that, but I question the
>> weighting. I do not think that the average person encounters the types
>> of words typically found in academic journals at the same frequency as
>> they would those found in newspapers or magazines. Therefore, I want to
>> re-weight the five sources to reflect a more average experience.
>
>Don't weight the sources, weight the people.
>That is, define a person by a "state vector"
> p = <w_A,w_B,...,w_E>
>representing his inclination/weight to read each
>kind of source. You're now kind of using p=<.2,.2,.2,.2,.2>.
>Is that really "average"? Or maybe you can't define
>a single average person. College-educated will probably have
>a different vector than high-school dropouts.
> So you ultimately have a five-dimensional (that is,
>#sources-dimensional) people space, with each point in that
>space having its own "likelihood distribution" for coming
>across your words. ... Or something like that. The basic
>point, again, being to weight the people.
This thread is very stale, so you probably won't read this, but who knows?
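If I follow you, the person-vector idea amounts to taking the dot product
of p with each word's per-source rates. A minimal sketch in Python, with
invented per-million rates and source order (I'm assuming A is the
academic source):

    import numpy as np

    # Invented per-million rates for two words across the five sources
    # A..E; source A is assumed to be the academic one.
    rates = {
        "the":        np.array([60000., 55000., 58000., 57000., 50000.]),
        "hypothesis": np.array([  120.,     4.,     2.,     3.,     1.]),
    }

    def likelihood(word, p):
        # Expected encounter rate: dot product of the person-vector p
        # (weights summing to 1) with the word's per-source rates.
        return p @ rates[word]

    p_equal = np.array([.2, .2, .2, .2, .2])  # the implicit current weighting
    print(likelihood("hypothesis", p_equal))  # 26.0 per million

That is the mechanics; my objection is to what it does to the academic
core vocabulary.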
The core vocabulary in academic writing is actually very small compared to the other sources, about 3,000 words. Each broad sub-category of academic writing adds a technical core of about 1,000 words, most of which are also in fairly common use. Beyond that, each paper contributes 3-5 specialized, idiosyncratic words, familiar to the small group of people interested in that paper but not in wide use otherwise. Weighting the people as you suggest preserves the worst feature of the data: it perversely implies that the simple core vocabulary is less likely to be encountered than it actually is.
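To put numbers on that (all invented, same rates as above): take a core
academic word every college-educated reader knows, and a perfectly
reasonable person-vector that gives academic reading a small share of
daily reading.

    import numpy as np

    # Per-million rates for "hypothesis" across sources A..E, with A
    # assumed academic (invented numbers).
    hypothesis = np.array([120., 4., 2., 3., 1.])
    p_typical  = np.array([.05, .30, .25, .25, .15])  # little academic reading

    print(p_typical @ hypothesis)   # 8.6 per million, down from 26.0

A word that is, in practice, part of every educated reader's core
vocabulary gets its score cut by two-thirds, simply because it lives in
the same corpus as the one-paper jargon.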
The OP is closer to the mark. As a practical matter, the likelihood of encountering the idiosyncratic words at random is essentially zero. It would be more robust to extract the general and sub-category cores, keep their raw scores, and re-weight only the remainder.
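A minimal sketch of that re-weighting, assuming per-source per-million
frequency tables; the core threshold and the academic discount here are
placeholders, not principled values, and a real version would repeat the
core test within each academic sub-category:

    def reweight(freqs, core_floor=1.0, academic="A", discount=0.1):
        # freqs: dict word -> dict source -> per-million rate.
        # A word attested at or above core_floor in every source is
        # treated as core vocabulary and keeps its raw mean rate;
        # otherwise its academic rate is discounted before averaging.
        scores = {}
        for word, by_source in freqs.items():
            if all(rate >= core_floor for rate in by_source.values()):
                scores[word] = sum(by_source.values()) / len(by_source)
            else:
                adjusted = [rate * discount if source == academic else rate
                            for source, rate in by_source.items()]
                scores[word] = sum(adjusted) / len(adjusted)
        return scores

    # E.g. a one-paper coinage: high in the academic source, absent elsewhere.
    print(reweight({"eigenjargon": {"A": 10., "B": 0., "C": 0.,
                                    "D": 0., "E": 0.}}))
    # {'eigenjargon': 0.2}  -- 10*0.1/5, instead of the raw mean of 2.0

The core words sail through untouched, and only the jargon gets pushed
down toward the near-zero encounter rate it actually has.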