Good Poll Questions

Are you looking to level up, or just have a good laugh? I had a blast compiling this list of funny poll questions.

Because PPP made no qualms about openly rooting for President Barack Obama’s re-election, many of the questions — such as asking Mississippi and Alabama residents if they believe Obama is Muslim — infuriate those on the right.
In October, after The Drudge Report and The Daily Caller released a much-hyped video of a 2007 Obama speech, PPP asked Wisconsin residents a series of questions on the topic.
With the 34 percent of voters remaining here on earth, Obama leads Romney 53-35, Gingrich 56-31, and Palin 61-26.
It found that an abnormally high 15 percent of Ohio Republicans believed Romney did, which earned PPP a fair amount of conservative criticism.

What happens if another carefully done poll of 1,000 adults gives slightly different results from the first survey? Neither of the polls is "wrong." This range of possible results is called the error due to sampling, often called the margin of error. Because polls give approximate answers, the more people interviewed in a scientific poll, the smaller the error due to the size of the sample, all other things being equal. Thus, for example, a "3 percentage point margin of error" in a national poll means that if the attempt were made to interview every adult in the nation with the same questions in the same way at the same time as the poll was taken, the poll's answers would fall within plus or minus 3 percentage points of the complete count's results 95 percent of the time.
A "push poll" is different: a large number of people are called by telephone and asked to participate in a purported survey. Several features distinguish a push poll from a legitimate survey: the number of calls made (a push poll makes thousands and thousands of calls, instead of hundreds for most surveys); the identity of who is making the telephone calls (a polling firm for a scientific survey, as opposed to a telemarketing house or the campaign itself for a "push poll"); and the lack of any true gathering of results in a "push poll," which has as its only objective the dissemination of false or misleading information.
Results of other polls – by a newspaper or television station, a public survey firm, or even a candidate's opponent – should be used to check and contrast the poll results you have in hand.
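For readers who want to see where a figure like a "3 percentage point margin of error" comes from, here is a minimal Python sketch of the standard formula for a proportion at the 95 percent confidence level; the sample sizes shown are illustrative rather than tied to any particular poll.

import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate margin of error for a simple random sample.

    proportion=0.5 is the conservative worst case; z=1.96 corresponds
    to the conventional 95% confidence level.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

for n in (400, 1000, 1500):
    print(f"n={n}: plus or minus {margin_of_error(n) * 100:.1f} points")
# n=400: plus or minus 4.9 points
# n=1000: plus or minus 3.1 points
# n=1500: plus or minus 2.5 points

Note that quadrupling the sample size only halves the error, which is why the gains from ever-larger samples diminish quickly.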

Poll Everywhere lets your audience respond using text messages (SMS), Twitter, or the web in real-time.

Writing Good Survey Questions: get reliable results and actionable insights from your surveys.
If you take the time to write good survey questions, you'll be well on your way to getting the reliable responses you need to reach your goals.
Writing survey questions that steer respondents toward one answer violates a survey's objectivity and biases the answers you get.
If you don't explain what you're talking about, you risk respondents getting frustrated and quitting your survey or, even worse, answering the questions randomly.
Read through these guidelines and you'll be writing survey questions like a pro in no time.

Most good telephone surveys of the general public use what is called a random digit dial (or "RDD") sampling technique to generate the sample of phone numbers used in the survey. Telephone numbers for Pew Research Center polls are generated through a process that attempts to give every household in the population a known chance of being included. This chance, however, is only about 1 in 154,000 for a typical survey by the Pew Research Center for the People & the Press. (See Why probability sampling for more information.) More to the point, the kinds of people who might volunteer for our polls are likely to be very different from the average American – at the least they would probably be more politically interested and engaged.
Nearly all of the surveys conducted by the Pew Research Center now include people who only have cell phones (see About Our Survey Methodology in Detail for more information). As the proportion of Americans who rely solely or mostly on cell phones has continued to grow, sampling both landline and cell phone numbers helps to ensure that Pew Research surveys represent nearly all adults. Because people in households with no telephone service are less likely than others to vote, their omission has not seriously damaged the accuracy of pre-election polls. Once we've completed a survey, we adjust the data to correct for the fact that some individuals (e.g., those with both a cell phone and a landline) have a greater chance of being included than others.
Question order matters as well: if the survey first asks about the economy and then asks about presidential approval, the respondent may still be thinking about the economy when answering the latter question. We typically ask the presidential approval question first in the survey because we do not want any other questions to affect respondents' answers to it. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between).
Question wording matters, too. For example, in January 2003, we asked this question on form 1: "Would you favor or oppose taking military action in Iraq to end Saddam Hussein's rule?" On form 2, we asked: "Would you favor or oppose taking military action in Iraq to end Saddam Hussein's rule even if it meant that U.S. forces might suffer thousands of casualties?" In this experiment, the form 1 question found 68% favored removing Hussein from power.
When it comes to congressional elections in the off-years, the generic ballot question asked by the Pew Research Center is: "If the elections for U.S. Congress were being held TODAY, would you vote for the Republican Party's candidate or the Democratic Party's candidate for Congress in your district?" where the order of "Republican Party's candidate" and "Democratic Party's candidate" is randomized. Many survey organizations, including Pew Research Center, use such questions to help gauge the size of possible swings in party representation in Congress. You can view the most commonly asked demographic questions in Pew Research Center for the People & the Press surveys, in the order we ask them, here.
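One of the steps described above is adjusting ("weighting") the completed interviews so that people with a greater chance of being sampled do not count extra. The following Python sketch illustrates that idea only in rough form, not Pew's actual procedure; the respondents and frame counts are invented.

# Hypothetical respondents from a dual-frame (landline + cell) sample.
# "frames" counts how many ways a person could have entered the sample.
respondents = [
    {"id": 1, "frames": 2},  # reachable by both landline and cell
    {"id": 2, "frames": 1},  # cell only
    {"id": 3, "frames": 1},  # landline only
]

# Base design weight: inverse of the relative chance of selection.
for r in respondents:
    r["weight"] = 1.0 / r["frames"]

# Rescale so the weights average to 1 across the sample.
total = sum(r["weight"] for r in respondents)
for r in respondents:
    r["weight"] *= len(respondents) / total

for r in respondents:
    print(r)
# The dual-service respondent ends up with a smaller weight (0.6)
# than the single-service respondents (1.2 each).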

Would you rather always take a cold shower or sleep an hour less than you need to be fully rested? Would you rather always get first dibs or the last laugh? Would you rather always have to say everything on your mind or never speak again? Would you rather always lose or never play? Would you rather always wear earmuffs or a nose plug? Would you rather always win pie-eating contests or always win wheelbarrow races? Would you rather be 3 feet tall or 8 feet tall? Would you rather be 3 feet taller or 3 feet shorter? Would you rather be a deep sea diver or an astronaut? Would you rather be a dog named Killer or a cat named Fluffy? Would you rather be a giant hamster or a tiny rhino? Would you rather be a tree or live in a tree? Would you rather be able to hear any conversation or take back anything you say? Would you rather be able to read everyone’s mind all the time or always know their future? Would you rather be able to stop time or fly? Would you rather be an unknown minor league basketball player or a famous professional badminton star? Would you rather be born with an elephant trunk or a giraffe neck? Would you rather be forced to tell your best friend a lie or tell your parents the truth? Would you rather be forgotten or hatefully remembered? Would you rather go about your normal day naked or fall asleep for a year? Would you rather be gossiped about or never talked about at all? Would you rather be hairy all over or completely bald? Would you rather be happy for 8 hours a day and poor or sad for 8 hours a day and rich? Would you rather be invisible or be able to read minds? Would you rather be rich and ugly, or poor and good-looking? Would you rather be stranded on an island alone or with someone you hate? Would you rather be the most popular or the smartest person you know? Would you rather be the sand castle or the wave? Would you rather eat a bar of soap or drink a bottle of dishwashing liquid? Would you rather eat a handful of hair or lick three public telephones? Would you rather eat a stick of butter or a gallon of ice cream? Would you rather eat a stick of margarine or five tablespoons of hot pepper sauce? Would you rather eat poison ivy or a handful of bumblebees? Would you rather end hunger or hatred? Would you rather find true love or 10 million dollars? Would you rather forget who you were or who everyone else was? Would you rather get caught singing in the mirror or spying on your crush? Would you rather get even or get over it? Would you rather give bad advice or take bad advice? Would you rather give up your computer or your pet? Would you rather go to an amusement park or to a family reunion? Would you rather go without television or junk food for the rest of your life? Would you rather have a beautiful house and ugly car or an ugly house and beautiful car? Would you rather have a kangaroo or koala as your pet? Would you rather have a missing finger or have an extra toe? Would you rather have one wish granted today or three wishes granted in 10 years? Would you rather have x-ray vision or bionic hearing? Would you rather invent a cure for cancer or a cure for AIDS? Would you rather kiss a jellyfish or step on a crab? Would you rather know it all or have it all? Would you rather live without music or live without TV? Would you rather love and not be loved back, or be loved but never love? Would you rather make headlines for saving somebody’s life or winning a Nobel Prize? Would you rather meet an alien visitor or travel to outer space?
Would you rather never use the internet again or never watch TV again? Would you rather not be able to use your phone or your e-mail? Would you rather only be able to whisper or only be able to shout? Would you rather own a ski lodge or a surf camp? Would you rather publish your diary or make a movie of your most embarrassing moment? Would you rather spend the day surfing the internet or the ocean? Would you rather sweat moderately but constantly 24 hours a day all over your body or have a metal pin in your jaw that constantly picks up talk radio stations? Would you rather die from falling off a cliff or by being threatened?

The CBS News Poll has been asking questions of the public for over thirty years, and it seemed about time for us to ask the people who follow our polls what they would like to know.
To that end, CBSNews.com solicited suggestions from the public for topics to ask about in our CBS News/New York Times 100 days poll that was released last night.
The wording of the questions was determined by the CBS News Polling Unit, wherever possible matching that of questions previously asked in CBS News Polls to facilitate trend comparisons.

TechWhirl publishes a wide variety of poll questions throughout the year, with a couple of goals in mind: starting some conversations around topics and emerging trends, and learning a bit more about our community.
To help celebrate TechWhirl’s 20th anniversary, we published a poll question on the big debates that have continued, and should continue, in the field of technical writing.
In 2013 our questions ranged from topical issues being discussed on the email discussion list and community forums to classic questions that have created debate for years among technical communicators and content pros.
That year’s lineup included the most popular poll question of the year, on ways to encourage SME cooperation in getting those technical reviews done.

The National Council on Public Polls, an association of polling organizations, has prepared these answers to frequently asked questions about survey research.
A reliable sample selects poll respondents randomly or in a manner that ensures that everyone in the area being surveyed has a known chance of being selected.
One step taken to ensure that all members of a population have a known chance of being selected is dialing randomly generated phone numbers, so that people with unlisted phone numbers are included along with people with listed numbers.
An unreliable poll, by contrast, is one based on a sample that selects itself, such as a radio station call-in poll, which rules out people not caring to respond or not even listening to the station.
Other examples of self-selected samples are poll questions that appear in newspapers, on web pages, and in magazines, or any poll likely to have a high percentage of members of the general public not able or not motivated to make the effort to respond.
Questions are checked for balance: Are they worded in a neutral fashion, without taking sides on an issue? Does the question represent both sides of an issue fairly? Answer choices read to poll respondents must also be balanced; e.g., approve or disapprove, favor or oppose.

Could your goal be accomplished using only level-of-agreement questions, or only level-of-satisfaction questions, or do you need both? Either way, try to group like questions together, so respondents spend less effort adjusting to rating-scale changes every time they reach a new question.
Adobe Presenter is a great tool to use if you wish to display rich information right beside the questions you are asking in your poll or survey.
Using the flexibility of the layouts and pods in Connect Pro, simply organize your room so that participants can clearly see both the graphic in question and the poll at the same time, and so it is clear what you are asking.
People may take their responses to your survey questions very seriously, and will not want to answer if the question does not apply or align with their views.
Run the survey questions past a sample set of people who are representative of your audience, and ask each person what the question means to him.
Polls and surveys are an easy way to learn more about your audience and their experience level, check whether they can recall important information you’ve presented, or gather feedback on the efficacy of your sessions.
If a list of items needs to be ranked, and requires a specific per-item ranking, a user may opt out of the question or the entire survey at this point because this is just too much work.
Use multiple choice when you wish to narrow down the selection to only one possible answer, such as a question asking for company size; in the majority of cases this has a single, specific answer for each participant.
As you formulate standard poll and survey questions, refer to the guidelines below to build effective polls and surveys.

Weirdpoll is a free, just-for-fun website that contains thousands of interesting questions: would-you-rather questions and other funny questions.
You are given random questions; after answering each one, you see statistics on how other people have answered, so you can see how different you are from people in general.

Automated phone surveys are phone calls in which a recorded voice asks questions and the person called types in responses on the keypad (e.g. "Who will get your vote for mayor? Press 1 for Joe…"). This provides a fast and affordable way to get answers from real people.
We allow you to email simple instructions to your voice talent, so all they have to do is call a phone number and read from a script.
It is your responsibility to make sure that you’re allowed to call the phone numbers you provide and that you meet any requirements about the content of your survey.
Each time you run a survey, we randomly pick which number to dial next from your phone list. If your survey has not hit its completion criteria (a certain number of completes, or finished calling everyone on a list) and it is later than the allowed times to call for the day, we will stop calling. Note that it’s possible we overshoot the number of completes a bit, because we stop calling additional numbers when you hit your target – but some of the ongoing phone calls may turn into additional completes.
We won’t call back people who took your survey (or who started to take your survey), but we will call back phone numbers that were busy, did not answer, or went straight to an answering machine.
We provide a phone tree explaining to recipients why they were called, and give them the option of adding their phone number to our system-wide Do Not Call list.
We deduct 10 cents when a call is dialed; you pay the same regardless of the outcome, whether a person picks up and completes a full survey, the call reaches an answering machine, or the line is busy.
The downloaded results include any columns that were present in the original phone list, plus answers to each question, plus information about the call, including its final status.
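To make the dialing mechanics concrete, here is a hypothetical Python sketch of such a calling loop. It illustrates the rules described above (random selection of the next number, call-backs for busy or unanswered numbers, and stopping at the completion target or the end of allowed calling hours); it is not the vendor's actual code, and the dial() function is assumed to exist.

import random
from datetime import datetime

def run_dialer(phone_list, target_completes, dial, calling_hours=(9, 20)):
    """Illustrative dialing loop for an automated phone survey.

    dial(number) is assumed to place a call and return one of:
    'complete', 'busy', 'no_answer', 'answering_machine', 'refused'.
    """
    pending = list(phone_list)              # numbers we may still call
    completes = 0
    while pending and completes < target_completes:
        hour = datetime.now().hour
        if not (calling_hours[0] <= hour < calling_hours[1]):
            break                           # past allowed calling times: stop for the day
        number = random.choice(pending)     # randomly pick which number to dial next
        status = dial(number)
        if status == 'complete':
            completes += 1
            pending.remove(number)          # never call back someone who took the survey
        elif status in ('busy', 'no_answer', 'answering_machine'):
            continue                        # leave in the pool and retry later
        else:
            pending.remove(number)          # refusals and bad numbers are dropped
    return completes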

Try some stupid questions; that would get more of the answers you want.

First, many parts of the health care legislation have already gone into effect, so it seems impossible to "keep [the law] from going into effect." Thus, I take the item to suggest that there are specific parts of the law that should be repealed before they go into effect and that everything that’s already in effect is okay; however, if I (or respondents) have to make assumptions about the question’s intent, then it’s a bad question.
What aspect of the legislation should be repealed? Is it the part about pre-existing conditions, the individual mandate, or the part about adult children being able to stay on their parents’ health insurance? The public should be skeptical of health care generalities, because people may disagree with the narrative of politicized health care legislation but not with the specifics of it.


When three different polling organizations conducted surveys last weekend to gauge public reaction to the news about the Department of Justice’s subpoenas of reporters’ phone records, their findings were quite different – a case study in the challenges pollsters face in a breaking news environment when public attention and information is relatively limited.
While the Pew Research Center, CNN/ORC and Washington Post/ABC News pollsters all took a similar approach in asking whether people felt the Department of Justice was right or wrong to subpoena the Associated Press reporters’ phone records, there were multiple differences in the phrasing, structure and context of the questions that help to explain the different findings.
First, Pew Research referred to the decision to subpoena phone records while the WP/ABC survey said the records were obtained… through a court order, which may make the actions seem more legitimate by highlighting a court’s involvement. The Washington Post/ABC News survey, by comparison, described the situation as follows: The AP reported classified information about U.S. anti-terrorism efforts and prosecutors have obtained AP’s phone records through a court order.
In this context, far more respondents offered an opinion when probed about the appropriateness of the DOJ’s actions – just 5% offered no opinion on the CNN/ORC survey, compared with 15% on the WP/ABC survey and 20% on the Pew Research survey. Moreover, the CNN/ORC survey is the one out of these three that found a substantial difference of opinion along partisan lines: Republicans were about half as likely as Democrats to say the DOJ’s actions were acceptable (30% among Republicans).
Immediately prior to asking about the legitimacy of the DOJ’s actions, Fox News asked: Does it feel like the federal government has gotten out of control and is threatening the basic civil liberties of Americans, or doesn’t it feel this way to you? There is a long track record in the survey industry of how framing a question in a particular context can affect responses, and there is little doubt that raising this broad criticism of government – which roughly two-thirds (68%) of respondents agreed with – prior to the DOJ question had some effect.
Where the first three polls said the DOJ subpoenaed, obtained through a court order, or secretly collected the phone records, the Fox News question used the phrase secretly seized to describe the DOJ’s actions. All in all, this question found more saying the DOJ’s actions were unacceptable than acceptable by a 52% to 43% margin, similar to the balance of opinion in the Pew Research survey.

Take a moment to review the information and then look at your webinar polling questions and answer options one more time.
As the results from your polling questions come in, take the time to share the results with your participants.
Often, those results can help you further tailor and refine the content of your webinar to fit the experience levels and needs of your participants.
Optimize Your Webinar Spending with the Webinar ROI Calculator
As a marketer, you’re probably asked to provide weekly results in order to justify your budget and efforts.

    This elementary term must be properly understood before we go further.  "Sampling error" is a built-in and unavoidable feature of all proper polls.  The purpose of polls is not to get direct information about a sample alone.  It is to learn about the "mother set" of all those from which a poll’s sample is randomly drawn.2  This "population" consists of everyone or everything we wish to understand via our sample.  A particular population is defined by the questions we ask.  It might be "all flips of a given coin" or "all presidential election voters in the 2008 American general election" or "all batteries sold by our firm in calendar 2008" or "all aerial evasions of predatory bats by moths" or "all deep-sky galaxies" or any number of other targets.  The object is not to poll the whole population, but rather to draw a sample from it and directly poll them for sake of authoring an "inference" or judgment about that population.  But all samples have an inherent property:  they fluctuate from one sample to the next one as each is drawn at random from the elements of the targeted population.  This natural property is "sampling error" or "margin of error" (Mystery Pollster:  What does the margin of error mean?).  These are not surveyor’s mistakes, but rather are inherent properties of all sampling (SESTAT’s Understanding Sampling Errors).  Cautions on reading and interpreting these are at PollingReport’s Sampling Error (Taylor 1998) or Robert Niles’ Margin of Error.
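    A quick way to see this natural sample-to-sample fluctuation is to simulate it.  The short Python sketch below (purely illustrative) draws repeated samples of 1,000 coin flips and counts how often the observed share of heads lands within the roughly 3.1-point margin of error that applies to samples of that size; the answer comes out near 95 percent.

import random

SAMPLE_SIZE = 1000
MARGIN = 0.031        # approximate 95% margin of error for n = 1000, p = 0.5
TRIALS = 2000

within = 0
for _ in range(TRIALS):
    heads = sum(random.random() < 0.5 for _ in range(SAMPLE_SIZE))
    share = heads / SAMPLE_SIZE
    if abs(share - 0.5) <= MARGIN:
        within += 1

print(f"{within / TRIALS:.1%} of samples fell within 3.1 points of 50%")
# Typically prints a figure close to 95%.
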
    A push poll is a series of calls, masquerading as a public-opinion poll, in which the caller puts out negative information about a target candidate (Push poll – Wikipedia).  Sometimes called robo-calls, the auto-call from a supposed polling operation spews out derogatory information about a specific target.  They call very large numbers of households to disseminate as much derogation as possible (Blumenthal 2006b, A Real Push Poll?", 8 September 2006).  They appear before presidential primary and general elections and in swing district congressional or senatorial contests, always by hard-to-trace nominally independent organizations not directly linked to the beneficiary candidate or party.7  They are quite common in recent elections.  Obviously someone in campaigns makes use of these shadow practitioners.  The operative most closely identified with their use is former Bush political strategist Karl Rove, suspected as director of the infamous February 2000 South Carolina accusatory telephone "polls" maligning Bush primary rival John McCain (Push poll – SourceWatch; Green 2007, The Rove Presidency; Moore and Slater 2006; NPR Karl Rove, ‘The Architect’ interview with Slater, 2006; Green 2004, Karl Rove in a Corner; Borger 2004, The Brains; Davis 2004, The anatomy of a smear campaign; Suskind 2003, Why are These Men Laughing?; DuBose 2001, Bush’s Hit Man; Snow 2000, The South Carolina Primary).  That did not end the practice despite the expose.  The 2006 midterm saw a spate of these (Drew 2006, New Telemarketing Ploy Steers Voters on Republican Path – New York Times, 12/6/06).  On eve of the 3 January 2008 Iowa caucuses, Republican rivals of Mike Huckabee received such calls (Martin 2007, Apparent pro-Huckabee third-party group floods Iowa with negative calls – Jonathan Martin’s Blog – Politico.com, 12/3/07).  One may expect another round of these in fall 2008 before the 4 November election of a 44th president and the 111th Congress.
    First, the questions must be worded in a clear and neutral fashion.  Avoid wording that will bias subjects toward or away from a particular point of view.  The object is to discover what respondents think, not to influence or alter it.  Along with clear wording is an appropriate set of options for the subject to choose.  It makes no sense to ask someone’s income level down to the dollar; just put in options that are sufficiently broad that most respondents can accurately place themselves.  A scan of good polls generally shows the "no opinion" option as well.  That’s to capture the commonplace fact that many people have no feelings or judgments one way or the other on the survey question.  If obliged to choose only from "True" or "False," many who have no opinion will flip the coin and check off one of those options.  Thus a warning:  the business of fashioning truly effective survey questions is not easy.  Even the best polls have problems with fashioning their questions to avoid bias, confusion, and distortion (Asher 2001, 44-61).  Roper illustrates this via a confusing double negative causing a high proportion of respondents to opt for a Holocaust-denial reply, whereas a more clearly worded question showed that this radical view is held by a tiny proportion of respondents (Ladd 1994, Roper Holocaust Polls; Kagay 1994, Poll on Doubt Of Holocaust Is Corrected – The New York Times).  It usually takes a professional like Professor Ladd to parse out such distinctions in question wording among valid polls.  This is where determined issue advocates can be valuable, because many watch out for subtle differences in question wording that can alter responses to the advocate’s pet issue (for example, Mooney 2003, Polling for Intelligent Design).  But with some practice it’s still feasible for any alert reader to see the difference between properly worded questions and the rest.
    Bad polls on the web do not include election results but are nonetheless remarkably abundant.  These fall into two basic categories.  First are amateur bad polls.  The web is positively overflowing with these.  These show self-selection and other errors like small sample sizes or badly worded questions.  Some are simply interactive web pages created for fun and dialogue with others.  They often make no pretense of being legitimate surveys.  Some are self-evidently not serious.  They all tend to have certain common signs of amateurs at work.  For one, there are frequent wrongly spelled words.  For another, the questions are worded in vague or unclear ways that may be typical of everyday speech but are strictly not allowed at legitimate polling sites.  Sometimes these are humorous sites with gonzo questions about a variety of current news items, especially those of salacious or bizarre nature.  Others are accompanied with blogs that really amount to ranting licenses.  Amateur bad polls are very easy to recognize on a little inspection.  Their samples are running tallies determined by whoever has chosen to participate one or more times.  They lack any "sampling error" because they’re just running tallies of recorded responses, not samples taken at random from a population.
    A website example of this practice is PulsePoll.community Network, which in Spring 2000 ran four pre-primary polls for the New Hampshire, Arizona, Washington and Colorado presidential primaries (at PulsePoll Primary: Arizona Results).  They got very similar results to four scientific telephone-based polls taken on the eve of these four events.  So they concluded that "The PulsePoll has made Internet polling history" with a web poll emulating telephone surveys in its forecasting accuracy.  But this claim does not bear close examination.  Objections from professional survey sources came in immediately.  Some are captured in Jeff Mapes’ article of 12 April 2000 entitled "Web Pollster Hopes To Win Credibility" in PulsePoll.com News The Oregonian.  Even if four spring 2000 primary polls did closely resemble legitimate survey results, that could be pure luck.  One should remember that the Literary Digest also used wrong sampling methods to correctly pick presidential winners in four straight elections from 1920 through 1932 (Rubenstein 1995, 63-67).  But they made one major mistake.  In 1936 they predicted a fifth one–and got it spectacularly wrong.  Luck has a natural way of eventually running out.
    Second, the subjects in the sample must be randomly selected (Research Methods Knowledge Base:  Random Selection & Assignment).  The term "random" does not mean haphazard or nonscientific.  Quite the opposite, it means every subject in a targeted or parent population (such as "all U.S. citizens who voted in the 1996 general election for president") has the same chance of being sampled as any other.  Think of it like tumbling and pulling out a winning lottery number on a State of Kentucky television spot; they are publicly showing that winning Powerball numbers are selected fairly by showing that any of the numbers can emerge on each round of selection (Kentucky Lottery).  Fairness means every number has identical likelihood of being the winning number, no matter what players might believe about lucky or unlucky numbers.  So "random" means lacking a pattern (such as more heads than tails in coin flips, or more of one dice number than the other five on tumbled dice) by which someone can discover a bias and thereby predict a result (Random number generation – Wikipedia).  That’s a powerful property, as only random selection is truly "fair" (unbiased on which outcome occurs).  Any deviation from random produces biased selection, and that’s one of the hallmarks of bad polls.
    Some advocacy groups attack legitimate pollsters and polls by distorting their data and purposes.  A Christian conservative group with the name Fathers’ Manifesto produced The Criminal Gallup Organization to attack this well-known and reputable pollster for alleged misrepresentation of American public opinion on legalized abortion.  They said "The fact that almost half of their fellow citizens view the 40 million abortions which have been performed in this country as the direct result of an unpopular, immoral and unconstitutional act by their own government, as murder, is an important thing for Americans to know.  This is not a trivial point, yet the Gallup Organization took it upon itself to trivialize it by removing any and all references to these facts from their web site." (Abortion Polls by the Criminal Gallup Organization)  That was followed with a link to the offender’s URL at , now a dead URL.  The truth is far simpler than conspiracy.  In late 2002, Gallup went private on the web with nearly all its regular issue sets, not excepting abortion.  One will only know this by escaping the confines of an advocate group’s narrow perspective and seeing the targeted poll and pollster’s own take on the issue.  And that can now readily be done, via the newer Gallup site’s search using "abortion polls."  That produces an Abortion In Depth Review summary of numerous polls dating from 1975 at this URL:  .
    These dirty campaign practices masquerade as legitimate polls.  They are not inquiries into what respondents truly think.  Traugott and Lavrakas (2000, 165) define them as "a method of pseudo polling in which political propaganda is disseminated to naive respondents who have been tricked into believing they have been sampled for a poll that is sincerely interested in their opinions.  Instead, the push poll’s real purpose is to expose respondents to information … in order to influence how they will vote in the election."  Asher (2001, 19) concurs:  "push polls are an election campaign tactic disguised as legitimate polling."  Their contemporary expression through automated telephone calls led Mark Blumenthal of Mystery Pollster to call them "roboscam," meaning an automated voice asks respondents to indicate a candidate preference, followed by a scathing denunciation of the intended target (Blumenthal 2006a, Mystery Pollster – RoboScam: Not Your Father’s Push Poll, 21 February 2006).  After a couple of attack-statements, it’s on to another number, hitting as many as possible for sake of maximizing the damage to the intended political target.  That, of course, is not real polling at all, which explains why Blumenthal shuns the very term "push poll" for these.
    How does one detect these false jewels?  Not simply by looking at the sample’s selection.  The Hootie Poll obtained a proper sample in the proper way, thus avoiding the most common reason for a "bad poll" label.  They also did not launch an immediate attack on a target the way robo-calls do.  Instead, questions worded in a deliberate leading way are the surest sign of these ugly polls.  Watch for loaded or biased questions somewhere in the question sequence.  Of course, most of these polls are done by telephone since that’s still the prevalent means of doing legitimate surveys; so in person one must wait out the innocuous queries before discovering the push component.  Once that does show up, ask yourself if that question or statement would be permitted in court of law without an objection from the subject’s counsel (or the judge).  If "objection!" followed by "sustained!" come to mind, you’re probably looking at an ugly poll.  These deserve no more of your time, and should be publicly given the contempt they so richly deserve.
    Legitimate polling organizations universally condemn push polls.  The National Council on Public Polls has shunned them since they masquerade as legitimate queries yet are intended to sway rather than discover the opinion of respondents (NCPP 1995, A Press Warning from the National Council on Public Polls).  So has the American Association for Public Opinion Research, which recommends that the media never publish them or portray them as polls (AAPOR 2007, AAPOR Statement on Push Polls).  Push polls are propaganda similar to negative advertising.  They are conducted by professional political campaign organizations in a manner that detaches them from the intended beneficiary of actions taken against a rival (see Saletan 2000, Push Me, Poll You in Slate Magazine).  Some political interest groups also use them, often in a hot-language campaign to raise money and membership by using scare tactics.  No matter the source, they treat their subjects with contempt.
    Next, remember this:  any self-selected sample is basically worthless as a source of information about the population beyond itself.  This is the single main reason for the famous failure of the Literary Digest election poll in 1936, where the Digest sampled 2.27 million owners of telephones and automobiles to decide that Franklin Roosevelt would lose the election to Republican Alfred Landon, who’d win 57 percent of the national popular vote (History Matters, Landon in a Landslide: The Poll That Changed Polling).  Landon didn’t!  Dave Leip’s Atlas of Presidential Elections, 1936 Presidential Election Results, displays the 36.54% won by Landon below the 60.80% of national popular vote won by the incumbent Roosevelt.  This even though the Digest had affirmed of its straw poll:  "The Poll represents the most extensive straw ballot in the field–the most experienced in view of its twenty-five years of perfecting–the most unbiased in view of its prestige–a Poll that has always previously been correct." (Landon in a Landslide)  Yeah, but a lot of 1936 depression-era Roosevelt voters didn’t own telephones or automobiles so never received the opportunity to voice their opinions.
    If all three of these criteria are met, you have reasonable assurance the poll is good.  How can you know this?  Expect all poll reports to honor the journalists’ rule.  They must cite all the information necessary to let you confirm the three conditions.  Even a brief news report can cite the method of selection (such as "nationwide telephone sample obtained by random digit dialing, on October 5-6, 1996"), the sample size and sampling error (1000 subjects, with sampling error of plus-or-minus 3.1 percent at a 95% confidence level), and the questions used in that survey.  For more extended print articles there are fuller guidelines (Gawiser and Witt undated, 20 Questions A Journalist Should Ask About Poll Results, Third Edition).  Still, most reports of poll results will not reproduce the poll questions in full for you to see; too little space in papers, too little time on television or radio.  So they must provide a link to the original source for the full set of questions.  With websites now universally available, no pollster can plausibly slip that responsibility.  Neither can any reputable news organization.
    Public opinion polls or surveys are everywhere today.  A nice sampling of professional surveyors is at Cornell Institute for Social and Economic Research (CISER), Public Opinion Surveys.  The Wikipedia Opinion poll site has history and methods of this emergent profession that was pioneered in America, and its Polling organizations lists some globally distributed polling organizations in other countries.  PollingReport.com compiles opinion poll results on a wide array of current American political and commercial topics.  USA Election Polls track the innumerable election-related polls in the election-rich American political system.  The National Council on Public Polls (NCPP) defines professional standards for and lists its members–but many polls online and off do not adhere to such standards.
    Rule One in using website polls is to access the original source material.  The web is full of polls, and reports about polls.  They are not the same thing.  A polling or survey site must contain the actual content of the poll, specifically the questions that were asked of participants, the dates during which the poll was done, the number of participants, and the sampling error (see next section below).  Legitimate pollsters give you all that and more.  They also typically have a website page devoted to news reports based on their polls.  The page will include links for the parent website, including the specific site of the surveys being reported.  So anyone who wants to directly check the information to see if the report is accurate, may easily do so on the spot.
    Biased samples can also be dangerous to democratic standards of voting for public office.  The most important self-selected population in the political world is the voting citizenry in democratic elections.  Serious political elections are obliged to follow three strict standards of fairness:  each individual voter gets to vote only once, no voter’s ballot can be revealed or traced back to that person, and every vote that is cast gets counted as a cast vote in the appropriate jurisdictional locale.  Internet voting is heralded as a coming thing, but so far the experience with it is studded with instances of ballot tampering by creative hackers.  That tampering is a violation of the third condition, that cast votes are counted properly.  ElectionsOnline.us–Enabling Online Voting (URL: ) assures us that it "makes possible secure and foolproof online voting for your business or organization," but hackers have demonstrated that security is a relative term.  AP Wire 06-21-2003 UCR student arrested for allegedly trying to derail election cites a campus hacker who demonstrated in July 2003 how a student election for president could be altered through repeat voting.  That’s documented online by Sniggle.net: The Culture Jammer’s Encyclopedia, in their Election Jam section (URL:  sniggle.net/index.php > sniggle.net/election.php); and there are other sources as well.
    The result was satisfying for CEO Johnson and unsatisfying for Burk.  One conservative advocacy group took the survey and ran with it (Center for Individual Freedom, Augusta National Golf Club Private Membership Policies under title "Shoot-Out Between Hootie and the Blowhard Continues").  Conway herself accompanied Johnson at a November 13, 2002 press conference to announce the poll result, which had an 800-person-based sampling error of 3.5%.  As portrayed in the official PGA website (Poll shows support for Augusta’s right to choose membership – PGATOUR.COM): "When asked whether — like single-sex colleges, the Junior League, sororities, fraternities and other similar same-sex organizations — "Augusta National Golf Club has the right to have members of one gender only," 74 percent of respondents agreed.  Asked whether Augusta National was "correct in its decision not to give into Martha Burk’s demand," 72 percent of the respondents agreed.’"  That would appear to wrap the matter up.
    Remember also that questions are half the story.  The other half is the set of responses available to the polled.  Another "ugly" sign is that respondents face choices designed to help ensure the pre-ordained response sought by the alleged pollster.  This is not done only by campaign organizations seeking to impeach a rival.  It is also done at web poll sites, sometimes in a rankly biased but amateur manner.  This is richly displayed at Opinion Center from Opinion Center.com.  One has to sample their fare to see how biased it truly is.  Here is one example that shortly followed the 2003 death of actress Katherine Hepburn:  "Everyone talks about how Katherine Hepburn was such a role model.  She wore pants, had a long affair with a married man, never had kids and never married.  Is this a good role model?"  The respondent is left to choose only a "yes" or "no" response to this rant.
    However, their standard internet polling site (PollingPoint – A Nationwide Network of Millions of People Inspiring Public Debate) invites the usual website visitors’ indulgence in online polling, with results showing almost nothing about resultant sample size, sampling error, or comparability to other polls.  This is still self-selected sampling rather than random selection.  I believe the jury is out; there is yet no consumer-linked warrant to inspire confidence in the results obtained by this method.
    Now suppose your self-selected sample is very large, and you cannot study all of it.  Then define that total sample as your population (called "all site visitors"), and seek a sample within it for intensive study.  But that takes random sampling from the population.  Inviting some of your site visitors to fill out surveys won’t tell you about "all site visitors."  Instead you get the relative few who bother to reply, and they are probably untypical of the rest.  So smart sellers who really want to know all their traffic seek to establish a full list of all customers–by posting cookies to their computers, by getting telephone numbers at checkout counters to produce comprehensive customer lists, or by telling you to go online to get a warranty validated whereupon you must show them an email address and telephone to get the job done.  Understand, though, that smart businesses do this to avoid hearing only from an untypical few of their customers.
    Granted, national pollsters cannot literally select persons at random from all U.S. citizenry or residents, because no one has a comprehensive list of all names (despite what conspiracy theorists want to believe).  So they substitute a similar method, of random digit dialing or "RDD" based on telephone exchanges (Random digit dialing – Wikipedia).  Or the U.S. Census Bureau will do block sampling; that is, they will randomly select city or town blocks for direct contact of sample subjects (Data Access Tools from the Census Bureau; or direct to Accuracy of the Data 2004).  Emergent web polls do the same from their mother population of potential subjects.  These honor the principle of pure random selection by coming as close to that method as available information allows.
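    As a simplified illustration of the RDD idea (not any pollster's production method), the Python sketch below attaches random final digits to known in-service area code and exchange combinations, so unlisted numbers have the same chance of turning up as listed ones; the prefixes shown are placeholders.

import random

# Placeholder list of in-service (area code, exchange) pairs.
ACTIVE_PREFIXES = [("212", "555"), ("415", "555"), ("606", "555")]

def rdd_sample(n):
    """Generate n random-digit-dialed phone numbers from known prefixes."""
    numbers = []
    for _ in range(n):
        area, exchange = random.choice(ACTIVE_PREFIXES)
        last_four = f"{random.randint(0, 9999):04d}"   # random final digits
        numbers.append(f"({area}) {exchange}-{last_four}")
    return numbers

print(rdd_sample(5))
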
    Internet polling is nonetheless here to stay.  By 2003 it had taken a quantum jump in publicity and material impact.  Even groups that know better will use it.  The Berkeley, California organization known as MoveOn.org ran an online vote among its membership on June 24-25, 2003 to determine which among the Democratic presidential candidates its membership preferred (MoveOn.org PAC at URL: ).  The result was a strong plurality for outspoken anti-Iraq War candidate Howard Dean, with 43.87% of 317,647 members who cast votes in this 48-hour period (Report on the 2003 MoveOn.org Political Action Primary).  The second-place result was nearly-unknown long-shot Dennis Kucinich, with 23.93% of the vote.  Near the bottom, the well-known candidates Joseph Lieberman and Richard Gephardt got 1.92% and 2.44% respectively!  What can be concluded from this?  Self-selection of a highly left-wing participant voter pool is dramatically obvious.  Stark distinction between this group and the actual 2004 Democratic presidential primary voters was forthcoming soon thereafter (Democratic Party presidential primaries, 2004).  But the appeal of doing such polls is evident.
    Third, the survey or poll must be sufficiently large that the built-in sampling error is reasonably small.  Sampling error is the natural variation that occurs from taking samples.  We don’t expect a sample of 500 flips of a coin will produce exactly the same heads/tails distribution as a second sample of 500.  But the larger the samples are, the less the natural variation from one to another.  Common experience tells us this–or it should.  A sample of newborn babies listed in large city birth registers will show approximately (but not exactly) the same proportion of boys and girls in each city, or in one city each time the register is revisited; but in small towns there is large variation in boy-to-girl ratios.  Generally, we do not want sampling error to be larger than about 5 percent.  That requires about 400 or more subjects, without subdivisions among groups within the sample.  If you divide the sample evenly into male and female subgroups, then you naturally get larger sampling errors for each 200-person subgroup.  Ken Blake’s guide entitled "The Ten Commandments of Polling" provides a step-by-step guide to calculating sampling errors for any given sample size; and you can go online to the DSS Calculator for that.  The sound theoretical grounding is in any standard book on statistics and probability, in manuals with scientific calculators, and in several websites listed below.
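    Those rules of thumb can be checked with the same formula the calculators use.  The Python sketch below assumes simple random sampling and the conservative 50/50 split; it computes the sample size needed for a target sampling error and shows why splitting a 400-person sample into 200-person subgroups inflates the error.

import math

Z95 = 1.96  # z-score for the customary 95% confidence level

def sample_size_for(margin, proportion=0.5, z=Z95):
    """Sample size needed to reach a given sampling error."""
    return math.ceil(z ** 2 * proportion * (1 - proportion) / margin ** 2)

def sampling_error(n, proportion=0.5, z=Z95):
    """Sampling error for a sample of size n."""
    return z * math.sqrt(proportion * (1 - proportion) / n)

print(sample_size_for(0.05))              # about 385 subjects for a 5 percent error
print(f"{sampling_error(400):.1%}")       # ~4.9% for the full 400-person sample
print(f"{sampling_error(200):.1%}")       # ~6.9% for each 200-person subgroup
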
 The only defense for this is that internet users of this site were somehow typical of the larger population of citizens, or more particularly, of citizens who vote in presidential primaries.  The problem with this is already known:  internet users were not a random sample of all citizens, all voters, or all presidential primary voters.  See "The Digital Divide" spring 2003 theme issue of IT&Society (URL: ) for indications that digital users were still quite different by factors such as wealth and political activism from the non-digital population.  There is no doubt that digital users have been different, and often so in ways that especially attract both politicians and advertisers to them.  But even if the self-chosen PulsePoll sample somehow captured all the attributes of its parent population of digital users, those users still did not resemble the true target population of presidential primary voters.
    All polls are surveys based on samples drawn from parent populations.  A poll’s purpose is to make accurate inferences about that population from what is directly learned about the sample through questions the sampled persons answer.  Knowledge of the sample is just a means to that end.  All good polls follow three indispensable standard requirements of scientific polling.
1 This practice is noticeably violated in recent years by Investor’s Business Daily and their polling agency, Technometrica Institute of Policy and Politics (IBD/TIPP).  TIPP does polls available only to IBD, which produces deeply biased reports based on TIPP surveys with no direct or full link to that surveyor’s questions or methods of acquiring its samples.  Their practices and results are of doubtful value, to say the least.  Nate Silver reviews a notorious recent IBD/TIPP poll of doctors thusly:  "that special pollster which is both biased and inept." (Nate Silver of FiveThirtyEight:  Politics Done Right at ibdtipp-doctors-poll-is-not-trustworthy, 9/16/2009).
    Incidentally, MoveOn.org, a knowledgeable organization on survey methods, engaged the professional services of a telephone polling organization to verify that its 317,647 votes were not biased through "stacking the ballot box" by anyone voting more than once.  To check this, a randomly selected sample of 1011 people from those 317 thousand were directly surveyed by telephone to ascertain that the sample results were remarkably close to those of the parent population.  That means if ballot stuffing were done at all, its effect was minor or negligible since the sample of 1011 was fundamentally similar in result to the population of 317,647 (Greenberg Quinlan Rosner Research, Inc.).
    So if they are worthless, why are they so commonplace?  Self-selected polls are highly useful for certain legitimate but limited purposes.  Sellers always want to know more about their customers; but such customer surveys are necessarily self-selected rather than selection as a random sample.  Suppose you are an internet seller such as Amazon.  You try for a profile of customers by inviting them to give you some feedback.  This helps you discover new things about them, gives tips on who else you’d want to reach, alerts you to trouble spots in advance, and lets you decide how to promote new products.  But none of this is to discover the nature of the parent population.  It’s to know more about those customers who care enough to respond.  All such samples are not random; they are biased via self-selection to include mostly the interested, the opinionated, the passionate, and the site-addicted.  All the rest are silent and therefore unknown.  So long as you understand this limitation, it is perfectly fine to invite the "roar of the crowd" from your customers.
    The most spectacular example of deliberate creation of a biased sample is associated with the annual voting culminated in May of 2001 through 2008 on American Idol.  American Idol FAQs explains how to vote once an Idol show is completed.  Voting by voice is done to toll-free numbers, but there’s also the option of text messaging.  The FAQ site says "if you vote using Cingular Wireless Text Messaging, standard Text Messaging fees will apply."  The show is tremendously popular, and voting requires waiting in line, unless the text message option is used.  Cingular does not disallow repeat messaging, for the baldly obvious reason that it charges a fee per message.  Thus FAQ says "input the word VOTE into a new text message on your cell phone and send this message to the 4 digit short number assigned to your contestant of choice (such as 5701 for contestant 1).  Only send the word ‘VOTE’ to the 4 digit numbers you see on screen, you cannot send a text message to the toll-free numbers."  That’s right, there are two separate procedures, one for toll free lines with slow one-at-a-time votes and then slow waits for another crack at it, another for fast repeat voting with fees to Cingular via text messaging.  That’s a positive invitation to creation of a highly biased sample.
    Another sophisticated bad poll is run by former President Clinton’s ex-advisor Dick Morris at Vote.com (URL: ).   Like PulsePoll, Vote.com is professionally presented in hopes of producing enough audience to interest advertisers in subsidizing the site.  The issues are current and interesting.  The site promises all participants that their opinions and votes truly count, since those in power will hear about the poll results.  That might satisfy the millions whom legitimate polls show are alienated from their own government.  But just like PulsePoll and its brethren, this site is irretrievably biased by its failure to do random sampling.  It does just the opposite, by inviting the opinionated to separate themselves from the silent and make their voices heard by those in power.
    All such ugly polls commit gross violations of ethical standards of behavior.  They masquerade as legitimate objective surveys, but then launch into statements designed to prejudice respondents against a specific candidate or policy.  Alongside the Hootie Poll, the web has produced other direct examples for perusal.  The investigative left-wing magazine Mother Jones in 1996 published Tobacco Dole, by Sheila Kaplan.  The target turned out to be former Attorney General of the State of Texas, Dan Morales.  He and other statewide office-holders are brought up routinely in the poll’s early questions, but from Question 24 onward the true purpose of this query is revealed in a series of relentlessly negative statements about Morales alone.  The reason was that Attorney General Morales at the time was point man for engaging the state in legal action against tobacco firms, and this alleged poll was a response designed to undermine that goal.
    Hired gun polls are not literally synonymous with advocacy polls, that is, polls used by advocacy groups to promote their viewpoints.  Advocacy polls have become very widespread in American politics over the past two decades (Beck, Taylor, Stanger, and Rivlin 1997 at REP16 – Issue Advocacy Advertising During the 1996 Campaign).  Issue advocacy is any communication intended to promote a particular policy or policy-based viewpoint.  Polls can be extremely helpful in doing this persuasively.  There is an important political market for legitimate poll-based issue information.  Advocacy groups often commission a poll to be done and then selectively release that information which furthers their cause.  But usually they do not go further, into the realm of push polling.
    That low response-rate samples invite bias is well known from congressional offices inviting citizen responses to franked mail inquiries.  It mainly draws responses from those who have some knowledge and interest in public affairs and who feel favorably toward that Member of Congress.  In Hite’s case, most knew and cared little about her or her very strongly held opinions on feminism and man-woman relations.  But a few did.  Those divided into persons who liked and shared Hite’s basic views, and those who didn’t.  The friendlies were far more likely to fill out and mail back the survey.  So Hite got a biased sample of Hite supporters.  This is non-response bias:  her sample was stacked with angry and dissatisfied women who were much more likely than the 95.5 percent non-responders to have had affairs outside of marriage and to tell that (Singer in Rubenstein 1995, 133-136; T.W. Smith 1989, Sex Counts: A Methodological Critique of Hite’s "Women and Love", pp.
2 The HIP-Sampling Error site defines Sampling Error as "That part of the total estimation error of a parameter caused by the random nature of the sample" where a Random Sample is "A sample that is arrived at by selecting sample units such that each possible unit has a fixed and determinate probability of selection."  In layman’s terms, this means every sample unit has the same likelihood of being included in the sample, yet there’s still error when making an inference about the population.  A self-selected sample that is not randomly selected from a population has no specification of sampling error–as the term is meaningless in that context.
8 The hired gun poll is succinctly described by Humphrey Taylor, chairman of the Harris Poll in the U.S., with journalist Sally Dawson.  See Public Affairs News – Industry – Polling:  Poll Position (June 2006) and scroll down to "hired gun" polling.  Taylor says "there is a long history of hired-gun polls which are actually designed to mislead people using every methodology."
    As one can see, subtlety is not a long suit at Opinion Center.com.  They borrow from legitimacy of real polls and profess this as their motto:  "Surveys are intended to elicit honest information for academic and consumer-oriented market research & entertainment."  Opinion Center falls alarmingly short of that.  But they do teach us how to recognize bias that is built straight into the questions and available responses.  The professional push polls and hired gun polls are considerably more difficult to smell out–but with a little practice and a skeptical eye, any layperson can get their drifts too.
    The dangers of self-selection may seem obvious by now, yet flagrant violations of random selection have sometimes received polite and promotional treatment in the press.  Shere Hite has made a successful career writing on the habits and mores of modern women.  In 1987 she hit the headlines and made $3 million selling a book based upon a mail survey of 4500 American women, drawn from a baseline sample of 100,000 women taken from lists compiled by various women's magazines.  The highlight was a report that well over half her sample of women married five or more years were having one or more extramarital affairs.  That got Hite oceans of free publicity and celebrity tours.  Yet the Hite 4500 were a heavily self-selected sample who chose to respond to Hite's invitation to disclose sensitive matters of private and personal beliefs and behavior.  This outraged legitimate surveyors, who know that any "response rate" (the percentage of those surveyed who complete the questions) below 60 percent invites distortion of the sample in favor of the vocal and opinionated few.  A response rate of 4.5 percent clearly will not do.
 That trashes the principle of random selection, where everyone in a target population has the same likelihood of being in the sample.  A proper medical experiment never permits someone to choose whether to receive a medication rather than the placebo.  No; subjects are randomly placed in either the "experimental group" (gets the treatment) or the "control group" (gets the sugar-coated placebo).  If you can call or e-mail yourself into a sample, why would you believe the sample was randomly selected from the population?  It won’t be.  It consists of persons interested enough or perhaps abusive enough to want their voices heard.  Participation feels good, but it is not random selection from the parent population.
    Polls have become indispensable to finding out what people think and how they behave.  They pervade commercial and political life in America.  Poll results are constantly reported by national and local media to a skeptical public.  Seemingly everyone has been contacted by a pollster or someone posing as one.  There is no escape from the flood of information and disinformation from polls.  The internet has enhanced both the use and misuse of such polls.  Any student therefore should be able to reliably tell a good poll from a bad one.  Bad ones are distressingly commonplace on the web.  What is more, bad polls come in two forms.
    The DSS Calculator also permits us to seek different levels of assurance about the sampling error.  We call this the "confidence level" (the resulting range is the "confidence interval").  Customarily we accept a 95% level, meaning that our 1000 flips will fall outside the ±3.1% range only 1 time in every 20 samples.  We get 500 heads plus or minus 31 on 19 trials out of 20.  If that isn't good enough for the cautious, they can select 99% instead, which produces a larger sampling error (about 4.1%) for a more cautious inference about the parent set of flips; now we predict 500 heads plus or minus 41.  Polls can be custom-fit for different accuracy demands.
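The arithmetic behind those figures is easy to check.  Below is a minimal Python sketch (my own illustration, not the DSS Calculator itself) that applies the standard margin-of-error formula for a proportion, z * sqrt(p(1-p)/n), to the 1000-flip example; a z of about 1.96 corresponds to the 95% level and about 2.576 to the 99% level.

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a sample proportion: z * sqrt(p * (1 - p) / n)."""
    return z * sqrt(p * (1 - p) / n)

n = 1000  # coin flips in the sample
moe_95 = margin_of_error(n, z=1.96)   # ~0.031 -> 500 heads plus or minus 31
moe_99 = margin_of_error(n, z=2.576)  # ~0.041 -> 500 heads plus or minus 41
print(f"95% level: +/-{moe_95:.1%}   99% level: +/-{moe_99:.1%}")
```

Running it prints roughly ±3.1% and ±4.1%, matching the plus-or-minus 31 and 41 heads described above.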
 Whatever the reason, we finish with a highly biased sample from which one cannot draw valid inferences on those questions about the population of all American women, or even about her original 100,000-name mailing list.  Low response rate is a well-known pitfall.  Alongside the Hite example, it is one of the many mistakes committed by the infamous Literary Digest polls (Squire 1988; Rubenstein 1995, 63-67).
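The mechanics of that distortion are easy to simulate.  The sketch below uses invented numbers rather than Hite's actual data: it assumes 25 percent of a 100,000-name mailing list have had an affair, but that those women are six times as likely to return the questionnaire.  The overall response rate then lands near 4.5 percent, and the responders report a rate of roughly two-thirds even though the true population rate is one quarter.

```python
import random

random.seed(1)

# Invented illustration of non-response bias (these are NOT Hite's data):
# assume 25% of the 100,000 women on the mailing list have had an affair,
# but those women are six times as likely to mail the questionnaire back.
population = [random.random() < 0.25 for _ in range(100_000)]  # True = affair

def responds(had_affair):
    # Hypothetical response propensities chosen to give roughly a 4.5% rate.
    return random.random() < (0.12 if had_affair else 0.02)

respondents = [affair for affair in population if responds(affair)]

print(f"True rate in the population:  {sum(population) / len(population):.1%}")
print(f"Overall response rate:        {len(respondents) / len(population):.1%}")
print(f"Rate among those who replied: {sum(respondents) / len(respondents):.1%}")
```

The biased two-thirds figure comes entirely from who chose to answer, not from any change in the underlying population.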
    But once polls are published, advocacy groups rapidly put them to their own uses.  Sometimes they do not link to the source.  For instance, see Scenic America's Opinion Polls:  Billboards are Ugly, Intrusive, Uninformative.  This is a typical advocacy group site, with a report based on several polls saying that the American people consistently dislike highway billboards.  But the polls are not linked (although this group does cite them properly at the bottom of their file).  Therefore readers must either hunt these down or take the report's word for it, and that is never a good idea in dealing with advocacy groups!  Advocacy groups have a bad habit of selectively reporting only the information that flatters their causes.  That should not be accepted at face value.  It is best to draw no conclusion at all unless one can access the source information for oneself.
    There are ways to get even with these moral offenders.  Herbert Asher, author of the six-edition polling text Polling and the Public:  What Every Citizen Ought to Know, recommends that citizens who are push-polled alert their local media to that fact (Asher 2005, 140).  One might also consider self-policing by political consultants through their organization, the American Association of Political Consultants.  However, a 1998 survey of political consultants showed that few believe their organization's formal stance against push polls is an effective deterrent (Thurber and Dulio 1999, reprinted from the July 1999 issue of Campaigns and Elections Magazine:  A Portrait of the Consulting Industry, p.
3 The 1995 NPTS Courseware Interpreting Estimates – Sampling Error site shows that sampling error follows naturally from drawing out only part of a population to create a sample.  In the DSS Calculator, entering a population size of 1000 and a sample size of 1000 produces a 0% sampling error, because the entire population went into that sample, so any second sample of 1000 cannot possibly vary from the first one.  That holds whenever the sample equals the entire finite population, even one as large as the 2004 presidential election voter turnout of about 122,000,000.  But if you enter a population of 122,000,000 and a sample size of 1220, you get a manageably small sampling error of about 3%, even though this sample contains only 1 in every 100,000 voters-to-be from the population.
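I do not know the exact formula the DSS Calculator implements, but the standard margin-of-error formula with a finite population correction reproduces both results in this footnote: a sample equal to the whole population yields zero sampling error, and 1220 respondents drawn from 122,000,000 yield roughly 3 percent.

```python
from math import sqrt

def moe_finite(N, n, p=0.5, z=1.96):
    """95% margin of error for a proportion, with a finite population correction."""
    if n >= N:
        return 0.0  # the sample IS the population, so a second sample cannot differ
    fpc = sqrt((N - n) / (N - 1))
    return z * sqrt(p * (1 - p) / n) * fpc

print(f"{moe_finite(1_000, 1_000):.1%}")        # 0.0%
print(f"{moe_finite(122_000_000, 1_220):.1%}")  # about 2.8%, i.e. roughly 3%
```

Note how little the huge population size matters once the sample is a true random draw; the sample size does nearly all the work.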
    But there are apparently some exceptions.  In 2002 the professional golf tour witnessed a political fight which ultimately yielded a hired gun poll that quite deliberately violated all the standards enunciated in The Polling Company™ definition.  Hootie Johnson, chairman of the Augusta National Golf Club, chose an aggressive counter-campaign against Martha Burk of the National Council of Women's Organizations, who sought to oblige the Masters Tournament's host club to open its doors to women for the first time.  He hired The Polling Company and WomanTrend, a Washington, D.C. polling firm chaired by a prominent Republican woman named Kellyanne Fitzpatrick Conway (the polling company™ inc.
    One effect of these slants is to invite skepticism about anyone who addresses hot-button political topics.  Students often mistakenly identify polls on controversial subjects as ugly polls.  This is patently incorrect.  It is perfectly legitimate for good polls to address the most touchy or delicate subjects.  In fact, those are often the things most worthwhile to know and understand.  Content addressing an explosive topic is not itself grounds for sensing "ugly" in a poll.  I recommend studying the legitimate polls to see how two or three of them address such hot-button topics as abortion or gun control.10  Once you see the nature of the wording, compare it to someone who is genuinely trying to sway you rather than learn what your opinions are.  With some practice and alertness, you won't find it difficult to tell good from ugly.
    This is a special category of bad poll, reserved for so-called pollsters who deliberately use loaded or unfairly worded questions under the guise of conducting an objective survey.  Some of these are done by amateurs, but the most notorious are produced by political professionals.  These include the infamous push polls.  I treat these first.  There are also comparable polls built on subtler question biases that steer respondents toward a preconceived set of responses.  These fall into the category of hired gun polls.  I treat them second, but not least.
    Remember another rule about sample size.  It does no harm that the sample is extremely small in number compared to the target population.  Consider coin flips as a sample designed to test the inherent fairness of a coin.  There is virtually no limit to the number of possible flips of a coin.  You want to know if the coin is fair, meaning that half of all flips will be heads and half tails.  So "all flips" is the population you want to know about.  "Actual flips" are the sample.  You can never know what "all flips" looks like, but that's OK.  The key to an accurate judgment about "all flips" is to make sure you have a large enough sample of actual flips.  Asher (2005, 78) gives similar examples: drawing a tiny fraction of one's billions of red blood cells to profile the blood, or a chef tasting a spoonful of soup before serving it.  Statisticians refer to this as the law of large numbers, and it is explained at many sites, such as The Why Files' Obey the Law.
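A short simulation makes the law of large numbers visible: as the number of actual flips grows, the observed share of heads settles ever closer to the 50 percent that characterizes "all flips."  This sketch is only an illustration of the principle, not a formal proof.

```python
import random

random.seed(42)

# As the number of actual flips grows, the share of heads drifts toward 0.5,
# even though "all possible flips" can never be observed.
heads = 0
for i in range(1, 100_001):
    heads += random.random() < 0.5
    if i in (10, 100, 1_000, 10_000, 100_000):
        print(f"{i:>7} flips: {heads / i:.3f} heads")
```

The early readings bounce around, but by 10,000 and 100,000 flips the proportion hugs 0.500, which is why a modest but genuinely random sample can stand in for an effectively infinite population.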

The main advantage of push polls is that they are an effective way of maligning an opponent ("pushing" voters towards a predetermined point of view) while avoiding direct responsibility for the distorted or false information suggested (but not directly alleged) in the push poll.
In March 2011, the Daily Telegraph reported that the Australian Labor Party (ALP) was referred to the New South Wales (NSW) Electoral Commission after it was alleged to have used "push polling" in Newcastle to discredit independent candidate John Tate.
In the 2008 presidential election, Jewish voters in several states were targeted by a variety of push polls that linked Barack Obama to various anti-Israel positions (mostly false or misinterpreted).
Push polls are generally viewed as a form of negative campaigning.[1] Indeed, the term is (confusingly) commonly used in a broader sense to refer to legitimate polls that aim to test negative political messages.[2] Future usage of the term will determine whether the strict or the broad definition becomes the favored one.
Consequently, push polls are most often used in elections with fewer voters, such as party primaries, or in close elections where a relatively small change in votes can make the difference between victory and defeat.
However, in all such polls, the pollster asks leading questions or suggestive questions that "push" the interviewee towards adopting an unfavourable response towards the political candidate.
Labor polling firm Fieldworks Market Research admitted to the Telegraph reporter that the script used when calling voters branded Mr Tate a "Labor" candidate, but said the script was provided by the ALP.[7] It is not known, at least in public, whether the Electoral Commission responded to this referral.
A push poll is an interactive marketing technique, most commonly employed during political campaigning, in which an individual or organization attempts to influence or alter the view of voters under the guise of conducting a poll.
In a push poll, large numbers of voters are contacted briefly (often less than 60 seconds), and little or no effort is made to collect and analyze response data.

Poll operates a lot like Facebook Questions (above), but it has some extra options for tracking your respondents, purchasing premium features like ad blocking, and the ability to hide header tabs.
Poll for Facebook is a free service with a slew of options including the ability to include a poll title, introduction text and advanced features such as creating a custom URL and privacy options.
Poll for Facebook is the most customizable and easiest to use of the available options, attracting major corporate users like the Food Network, the Baltimore Ravens and Clarins Paris to the service.

For example, you should avoid asking a series of questions about a free banking service and then a question about the most important factors in selecting a bank.
The issues raised in one question can influence how people think about subsequent questions.
Words are often used in different ways by different people; your goal is to write questions that each person will interpret in the same way.
Start the survey with questions that are likely to sound interesting and attract the respondents’ attention.
Voicing questions in the third person can be less threatening than questions voiced in the second person.
Most questionnaires rely on questions with a fixed number of response categories from which respondents select their answers.
Some questions involve concepts that are difficult for many people to understand.
It is good to ask a general question and then ask more specific questions.
After your test respondents have completed the survey, brainstorm with them to see whether they had problems answering any questions.

Back to the governor’s race for a moment – Democratic candidate Ed FitzGerald says he’ll work to guarantee that all Ohio 4-year-olds have access to public preschool by 2018 if he’s elected governor.  And speaking of preschoolers – a bill has been introduced in the House that would require kids in day cares to be vaccinated for preventable diseases such as mumps and measles.
In a second Quinnipiac poll of Ohio voters, Kasich outperformed all other widely discussed Republican candidates for president in a 2016 race against potential Democratic candidate Hillary Clinton, but the former Secretary of State still beat any one of the possible Republican candidates.

Over the last 60 years, poll questions that asked people which candidate they expected to win have been a better guide to the outcome of the presidential race than questions asking people whom they planned to vote for, the study found.
Wolfers disagreed, saying he thought that voters’ predictions were based mostly on friends, yard signs and other private information, given that responses to the expectations questions varied so much.
With response rates to polls having fallen sharply in recent years, thanks to mobile phones, caller identification and a rise in phone solicitation, expectations questions have the potential of effectively increasing a survey’s sample.
On average, about 70 percent of people predict that their preferred candidate will win; if a poll only of Democrats found that 60 percent expected their candidate to win, that would suggest the Republican was the favorite.
But another kind of polling question, which received far less attention, produced a clearer result: Regardless of whom they supported, which candidate did people expect to win? Americans consistently, and correctly, said that they thought Mr. Obama would win.
In the last three weeks, polls — including by ABC/Washington Post, Gallup, Politico/George Washington University and Quinnipiac University/New York Times/CBS — have consistently found that more Americans expect President Obama to win than expect Mr. Romney to win.
Frank Newport, editor in chief of Gallup, said he was intrigued enough by the paper to have talked with the authors about how to include expectations questions in more polls.
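The rule of thumb described above (partisans typically say their own candidate will win about 70 percent of the time, so a noticeably lower reading suggests the other side is favored) can be written out explicitly.  The toy function below uses my own naming and framing; it is not the study's actual method, just the heuristic as stated.

```python
# The 70% figure and the decision rule come from the passage above;
# the function name and framing are mine, not the study's method.
TYPICAL_OWN_SIDE_OPTIMISM = 0.70  # share of partisans who usually expect their side to win

def implied_favorite(share_expecting_own_win, own_party, other_party):
    """If partisans are less optimistic than usual, the other side is likely ahead."""
    if share_expecting_own_win < TYPICAL_OWN_SIDE_OPTIMISM:
        return other_party
    return own_party

print(implied_favorite(0.60, "Democratic", "Republican"))  # -> Republican
```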

Users can answer Facebook Questions — but even a year after releasing the feature, Facebook hasn’t bothered to give mobile users the ability to ask questions.
So no matter how confidently someone tells you they know how EdgeRank works, if their name-tag doesn’t say Facebook Engineer, don’t freaking trust ‘em.
But what the social media “gurus” rarely tell you is that while Facebook provides the equation, it doesn’t provide the values of the variables.
Polls — or “Questions” as Facebook calls them — shouldn’t be used unless they end up on the “Demands” list of a hostage negotiation.
If someone comes to my door and claims Facebook “Question” polls are useful, I will punch them in the mouth until I hear the words in reverse.
In addition to anecdotal evidence from users, there are 2 main reasons I can offer for why Facebook polls are kinda crappy.
When Facebook Questions launched a little over a year ago, bloggers like myself expected it to be community engagement crack cocaine.
On September 13th, Battlefield hosted a Facebook poll which received 7577 total responses.
I’ve covered before how Facebook uses EdgeRank to decide which items they show in your news feed.
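For readers who have not seen it, the publicly described form of EdgeRank was a sum, over a story's edges, of affinity times edge weight times time decay; what was never published are the actual values, which is exactly the complaint above.  The sketch below is a guess at the shape of that computation with made-up placeholder numbers, not Facebook's real parameters or code.

```python
from dataclasses import dataclass

@dataclass
class Edge:
    affinity: float  # how much the viewer interacts with the creator (scale unknown)
    weight: float    # value assigned to the interaction type (like, comment, poll vote...)
    decay: float     # time decay; older interactions count for less

def edgerank_score(edges):
    # Publicly described shape of EdgeRank: sum of affinity * weight * decay over edges.
    # The real parameter values were never published; everything here is a placeholder.
    return sum(e.affinity * e.weight * e.decay for e in edges)

story = [
    Edge(affinity=0.8, weight=2.0, decay=0.9),  # a recent comment from a close friend
    Edge(affinity=0.3, weight=1.0, decay=0.5),  # an older like from an acquaintance
]
print(edgerank_score(story))  # 1.59 with these made-up numbers
```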

18 November 2002 Barring the ludicrous Jason movies, comic book films, and the like, what’s the most unbelievable scene where a human villain is still alive after suffering what should be a mortal blow? P.S.: Multiple Spoilers Ahead!! (Did we miss your favorite unstoppable baddie? Write us at poll@imdb.com and make yourself feel better.
19 November 2002 Barring comic book films and the like, what’s the most unbelievable scene where a human protagonist is still alive after suffering what should be a mortal blow or beating? P.S.: Multiple Spoilers Ahead!! (Did we miss your favorite unstoppable hero? Write us at poll@imdb.com and make yourself feel better.
22 February 2002 Which movie were you most sorry to see completely shut out of this year’s Oscar race? (Courtesy Matt B.
1 July 2002 Which movie had the best scene with an egg or with eggs? (Did we miss your favorite egg scene? Write us at poll@imdb.com and make yourself feel better.
17 February 2002 "Other" was the most popular response to our question: "Which film, that you were really keyed up and effusive about just a few years ago, embarrasses you now the most?" Some of you wrote in and suggested a few particular films so we’re rerunning the poll with them included.
13 March 2003 Which 2002 release should have fared much better in this year’s Oscar race overall? (Courtesy of Matt B.
29 September 2004 Only 100 votes or so separated the top four finalists in our "What recent movie star’s name do you most enjoy saying?" poll last week, so we’re doing a run-off.
20 February 2002 What do you dislike the most about watching your favorite movie on network TV? (Courtesy Chad B.
14 January 2002 If I never have to watch another person [blank] in a movie again, I’ll be happy. (Did you vote for "Other?" Send your [blank] to poll@imdb.com and perhaps we’ll rerun this.
2 April 2002 We ran this poll last December but it bears repeating: "Tough question today.
25 September 2002 Lots of responses to the poll: What popular song used in a movie (but not originally from that movie) causes you to flash back to that movie whenever you hear it? Here are the results of that poll, but we wanted to add the most popular suggestions and try again.
31 March 2009 Cinematical recently offered their list of seven women who should be Bond Girls; rounding the number up to ten, we’ve added three women from some of the top-grossing films of last year.
9 July 2002 The first MIB was the #2 movie for 1997 (by release year, not calendar year).
6 June 2002 Which movie has the most convincing scene where a character vomits/retches from pure guilt? (Courtesy Adam K.
27 February 2002 Julia Roberts said, "I cannot absorb living in a world where I have an Oscar for Best Actress and Denzel doesn’t have one for Best Actor." What’s your response to that statement? (Courtesy Lee B.
8 February 2006 Which movie do you think should have been nominated for Best Picture this year? (Suggested by James A.
9 January 2002 January is known as one of the two standard times in the movie year for the studios to dump bad product.
11 November 2002 Sight and Sound magazine conducted a critics’ poll for the best film of the past 25 years.
14 November 2002 Sight and Sound magazine conducted a critics’ poll for the best film of the past 25 years.
13 November 2002 You had such great suggestions for "What actor’s appearance was so out of place that it nearly ruined a very good film for you?" that we’re rerunning it, though keeping a few of the top vote recipients and relaxing our "very good film" stipulation.
7 January 2002 The American Film Institute awards occurred last night.
16 December 2009 Last week Bret McKenzie and Jemaine Clement of "Flight of the Conchords" announced they would not be making a third season of their HBO comedy.
8 March 2002 It’s beginning to feel like the Academy’s Best Song nominations are geared more toward attracting really cool pop/rock stars and legends than toward recognizing how great the song from its respective film was.
8 November 2001 If you were taking your child and/or some other young relative to the movies, which company’s banner would give you the greatest feeling of comfort that you were probably going to see a great family movie? (Yes, we realize that some of these companies actually feed into the slates of the studios also listed, just go with us.
23 February 2002 Disney’s been making sequels to the classics that were created under Walt’s watch.
2 February 2010 This year’s Razzie nominees were just announced, and the "Worst Screen Couple for 2009" is typically mean/funny.
26 February 2002 Disney’s been making sequels to the classics that were created under Walt’s watch.

