Has anyone any knowledge of whether the Richard T. LaPiere poll experiment has been updated since 1930?

In 1930, Richard T. LaPiere demonstrated that, at least in some areas of life, people did not always DO what they said they did, nor did they always "believe" what they said they did. He took some friends of his on a trip across the USA; they were of Chinese descent but spoke unbroken English. They were turned away from only one establishment out of hundreds. Detailed records were kept of everything that happened on the trip. Once home, he "polled" these same hotels and found that, except for a VERY few (one to be exact, and one abstained), all of them said they would NOT harbor someone of Chinese descent in their hotel.

To make sure the trip itself had not influenced the owners, he also polled places he had not visited and got the same results. No one would say they would accept them in their establishment. Remember, this was 1930.

This brings up an interesting dynamic. If people generally do not do what they claim they will do (for whatever reasons...), then polls and questionnaires are not really worth very much.

Anyone know of any updated studies on this subject?

1-10 of 25
craftyv, 6 years ago
Great question. I have worked in research (polling) and can tell you that the research companies only collect the data and state, "at the time of polling, this is what people said," in statistical form, i.e. x% said this, y% said the other, and z% didn't know. The reports are usually prepared by the client, and there can be a big difference between what is reported and what is left out. Remember, a 4-5% allowance (plus-or-minus) is made for statistical error. Psychology is a fascinating topic; I don't think we can cover it here. I am now looking up the survey/case you referred to. Thanks.
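(For anyone curious where that 4-5% allowance comes from: for a simple random sample at roughly 95% confidence, it corresponds to the standard margin-of-error formula for a proportion. A minimal sketch, with a hypothetical sample size of 500 and the worst-case proportion p = 0.5:)

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of ~500 respondents gives roughly the 4-5% allowance mentioned above.
moe = margin_of_error(0.5, 500)
print(round(moe * 100, 1))  # prints 4.4 (percentage points)
```

(Note how the margin shrinks only with the square root of the sample size: quadrupling the sample merely halves the error.)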
Goodhart (author), replying to craftyv, 6 years ago
It just bothers me that there are so many "variables" involved, and then, on top of it all, the problematic "associations" that are placed on the results: 60% of all men shave with a blade, therefore the 40% using electric are well off enough to buy electric razors... WHAT?

Or worse yet, 60% of men use blades, and 60% of men wear boxer shorts, therefore men that wear boxer shorts use blades. Such fallacies are rampant in the business from what I can see.

The two examples I wrote here are completely fictional, of course, just illustrations of how people abuse statistics... but I am sure you are well aware of all of this.
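(The boxer-shorts fallacy can be made concrete: two marginal percentages by themselves never determine the overlap between the groups; they only bound it. A small illustration with made-up numbers:)

```python
# Hypothetical population of 100 men: 60 use blades, 60 wear boxers.
# The overlap (boxer-wearers who also use blades) is NOT pinned down by
# those two marginals alone -- inclusion-exclusion only bounds it.
n, blades, boxers = 100, 60, 60

min_overlap = max(0, blades + boxers - n)  # at least 20 must be both
max_overlap = min(blades, boxers)          # at most 60 can be both

print(min_overlap, max_overlap)  # prints: 20 60
```

(So "60% and 60%, therefore boxer-wearers use blades" could describe anything from a 20-man overlap to a 60-man one; the two figures alone prove nothing about the association.)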
Questionnaires have improved tremendously since 1930, and some are immensely clever, such as the MMPI-2.

Psychological and sociological statistics and methodology have also improved since 1930.

Here's one critique of the study's methodology and the interpretation of the results.  There are many.

Bottom line:  Allowing Chinese people to stay in a hotel does not mean that the hotel owners weren't racist.  Saying you're going to refuse a Chinese couple accommodations does actually mean something, and those results wouldn't be replicated again today.  Regardless, racism existed then and still does today.
Goodhart (author), replying to AngryRedhead, 7 years ago
Yes, racism did and does exist. The problem was the "reason" behind one answer and the opposing action. There could have been many "reasons" for the person to give a specific answer (someone else present in the room, one's upbringing, one's "idea" of what "Chinese" were as opposed to what reality was, etc.), and sadly all these externals exist today. Which is why (say) Fox News can take one cross-sectional poll and get one answer, and CNN can do the same poll and get another, when in reality neither is getting the "facts". Even the method of asking the same question hidden under different wording can only reveal that the person is not being honest, but rarely what he or she actually thinks. All too often, people "answer" what they think the "answer should be" rather than being honest.

A recent example, using methods that "should" have ferreted out reality but only seemed to confuse it for me: someone I know believes I have a very "Liberal" stance politically, I have always thought I leaned a bit conservative, and a questionnaire that was supposed to make this determination rated me as Moderate, middle of the road. I just don't put a lot of faith in polls, I guess.
This is getting very, very messy.

CNN and Fox News polls aren't valid, reliable surveys if only because of their populations.  Consider who watches CNN vs. Fox News.  CNN is generally a bit more moderate than Fox News.  The populations are different.  Consider who is willing to call in and give a response.  There are a lot of people who aren't calling in and giving their opinion.  Consider all the people who don't watch CNN or Fox News!!  There are a lot of people who don't give a crap about the news but are still willing to have an opinion on any given issue, and there are people who get their news through other sources such as newspapers, NPR, The Daily Show, etc.  Having no opinion is still valid, but generally people won't call in to give their opinion as "no opinion" especially when the options are "yes" or "no".  When I see the questions that they use, I usually think, "You can't ask that!  That question is completely illogical, unfair, and loaded!"

Another huge problem with news coverage, especially with the likes of CNN and Fox News, is that they're privately funded by large companies through advertising revenue.  A lot of news simply goes uncovered by major news sources which is why there are books being published by Project Censored.  There are simply questions that are never asked, and there are a lot of questions that are loaded with results that could be spun in a few different ways.

I have no idea what poll you took, but there's a very good chance that it isn't valid or reliable.  For the poll to be valid, it needs to be reliable, but the political views of an individual can change wildly from one day to the next which hurts reliability and, consequently, validity for any measure of political views/stance.  And also consider that most surveys, questionnaires, polls, indexes, etc. aren't really intended for analyzing the individual.  Most only make sense when using a large population so that the researcher can find an average or a median and run ANOVA tests and the like.
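(For what those ANOVA tests actually compute: a one-way F statistic is just between-group variability divided by within-group variability, which is why it only makes sense over a large population rather than a single respondent. A minimal sketch with invented data, not any real survey:)

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group variance over within-group variance."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three hypothetical response groups; a large F suggests the group means differ.
f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
print(f)  # prints 13.0
```

(A single individual's answers give you nothing to divide here, which is the point: these tools describe groups, not people.)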

Please read the "Theoretical Critique" section in the article I linked.  Comparing LaPiere's findings to CNN and Fox News poll results simply doesn't make sense.
Goodhart (author), replying to AngryRedhead, 7 years ago
Here is the problem: all too often, "polls" are cited as proof of something; commercials use them... and by polling enough "segments" they can eventually find a segment where "4 out of 5 doctors recommend...". Yes, I do not watch "network news" because of the bias. I get most of my news from NPR and public TV. It wasn't that I was making the comparison so much as saying this is what the "public" hears, and tends to follow. At least, that has been my experience, both locally and "online" where people claim to care about something. :-) Sorry if I ruffled feathers, I had no intention of doing so...
I'd say that applies to most questionnaires; people idealize themselves a lot on paper, giving the opinion of what they'd like to be or do but don't. Or they follow loaded questions...
Goodhart (author), replying to killerjackalope, 7 years ago
That is the conclusion they reached with the experiment: that people say "what they thought others wanted them to say" rather than what they really would do or what they really think. This makes nearly every poll a farce, IMHO.
Makes focus groups an odd thing; they came about because they gave in-depth and useful answers. Does that make the participants more imaginative, then?
Goodhart (author), replying to killerjackalope, 7 years ago
I guess it would depend on how "free-flowing" the conversation was... if it were more like a brainstorming session, it might produce something. :-)