
Topic: Please critique my scheme for re-weighting source data
Replies: 7   Last Post: May 27, 2012 11:57 AM

 JohnF Posts: 219 Registered: 5/27/08
Re: Please critique my scheme for re-weighting source data
Posted: Feb 24, 2012 5:35 AM

Jennifer Murphy <JenMurphy@jm.invalid> wrote:
> Rich Ulrich <rich.ulrich@comcast.net> wrote:
>

>> What are you trying to do?
>
> I am trying to calculate for each word the relative likeliness that it
> would be encountered by an average well-educated person in their daily
> activities: reading the paper, listening to the news, attending classes,
> talking to other people, reading books, etc.
>
> The raw scores that I have already do that, but I question the
> weighting. I do not think that the average person encounters the types
> of words typically found in academic journals at the same frequency as
> they would those found in newspapers or magazines. Therefore, I want to
> re-weight the five sources to reflect a more average experience.

Don't weight the sources, weight the people.
That is, define a person by a "state vector"
p = <w_A,w_B,...,w_E>
representing his inclination/weight to read each
kind of source. You're now kind of using p=<.2,.2,.2,.2,.2>.
Is that really "average"? Or maybe you can't define
a single average person. College-educated will probably have
a different vector than high-school dropouts.
So you ultimately have a five-dimensional (that is,
#sources-dimensional) people space, with each point in that
space having its own "likelihood distribution" for coming
across your words. ... Or something like that. The basic
point, again, being to weight the people.
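A minimal sketch of that idea in Python. The source names and per-source
frequencies below are made up for illustration; the point is just that the
expected encounter rate for a word is the dot product of a person's state
vector with that word's per-source frequencies, and the current scheme is
the special case of the uniform person p = <.2,.2,.2,.2,.2>:

```python
# "Weight the people, not the sources": a person is a state vector of
# inclinations toward each source type, summing to 1. All frequencies
# here are hypothetical, for illustration only.

SOURCES = ["news", "magazine", "fiction", "speech", "academic"]

# Hypothetical raw frequencies (occurrences per million words) by source.
freq = {
    "galaxy": {"news": 12.0, "magazine": 15.0, "fiction": 8.0,
               "speech": 5.0, "academic": 40.0},
    "anyway": {"news": 30.0, "magazine": 45.0, "fiction": 90.0,
               "speech": 120.0, "academic": 10.0},
}

def encounter_rate(word, person):
    """Expected encounter rate: sum over sources of weight * frequency."""
    return sum(person[src] * f for src, f in freq[word].items())

# The current scheme is effectively the uniform person:
uniform = {s: 0.2 for s in SOURCES}

# A reader who almost never opens an academic journal:
casual = {"news": 0.3, "magazine": 0.25, "fiction": 0.2,
          "speech": 0.24, "academic": 0.01}

print(encounter_rate("galaxy", uniform))  # 16.0
print(encounter_rate("galaxy", casual))   # 10.55 -- "galaxy" drops
print(encounter_rate("anyway", casual))   # 67.15 -- "anyway" rises
```

Averaging encounter_rate over a population of such vectors (or over a few
representative ones, e.g. college-educated vs. not) then gives the
re-weighted score directly, without ever committing to a single set of
source weights.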

--
John Forkosh ( mailto: j@f.com where j=john and f=forkosh )
