Introduction
A critical part of social research is deciding what will be observed
and what will not. It is often impractical or even impossible to survey or observe
every element of interest. Sampling methodology provides
guidelines for choosing from a population some smaller group that represents the
population’s important characteristics. There are two general approaches to
selecting samples: probability and nonprobability sampling.
Probability sampling techniques allow researchers to select relatively few
elements and generalize from these sample elements to the much larger population.
For example, before the 1984 US presidential election, George Gallup’s poll
correctly predicted that the popular vote would split 59 percent to 41 percent in
favor of Ronald Reagan. This accurate prediction was based on the stated voting
intentions of a tiny fraction—less than 0.01 percent—of the 92.5 million people
who voted in the election. Accuracy was possible because Gallup used probability
sampling techniques to choose a sample that was representative of the general
population. A sample is representative of the population from which it is chosen
if the aggregate characteristics of the sample closely approximate those same
aggregate characteristics in the population. Samples, however, need not be
representative in all respects; representativeness is limited to those
characteristics that are relevant to the substantive interests of the study. The
most widely used probability sampling methods are simple random sampling,
systematic sampling with a random start, stratified sampling, and multistage
cluster sampling.
Nonprobability sampling methods, such as purposive, convenience, and quota sampling, do not ensure a representative sample. These samples are not useful for drawing conclusions about the population because there is no way to measure the sampling error. Purposive and convenience sampling allow the researcher to choose samples that fit his or her particular interest or convenience; quota sampling aims to generate a representative sample by developing a complex sampling frame (a quota matrix) that divides the population into relevant subclasses. Aside from being cumbersome, however, the nonrandom selection of samples from each cell of the quota matrix decreases the likelihood of generating a representative sample.
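To make the mechanics of the quota matrix concrete, the following is a minimal sketch of quota sampling, assuming a hypothetical population and two hypothetical stratification fields (sex and age group). The nonrandom fill step at the end is precisely what undermines representativeness.

```python
import random

# A minimal, illustrative sketch of quota sampling; all field names and
# proportions here are hypothetical. The quota matrix divides the population
# into subclasses (sex x age group) and assigns each cell a quota
# proportional to its share of the population.
population = [
    {"id": i, "sex": random.choice(["F", "M"]),
     "age_group": random.choice(["18-34", "35-54", "55+"])}
    for i in range(10_000)
]

sample_size = 200
cells = {}
for person in population:
    cells.setdefault((person["sex"], person["age_group"]), []).append(person)

quota_sample = []
for cell, members in cells.items():
    quota = round(sample_size * len(members) / len(population))
    # Nonrandom fill: take the first (most convenient) members who fit each
    # cell -- the step that makes the result nonprobabilistic.
    quota_sample.extend(members[:quota])
```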
Probability
theory is based on random selection procedures and establishes
three things: that each random sample drawn from a population provides an estimate
of the true population parameter, that multiple random samples drawn from the same
population will yield statistics that cluster around the true population value in
a predictable way, and that it is possible to calculate the sampling error
associated with any one sample. The magnitude of sampling error associated with
any random sample is a function of two variables: the homogeneity of the
population from which the random sample is drawn and the sample’s size. A more
homogeneous parent population will have a smaller sampling error associated with a
given random sample. Moreover, sampling error declines as the size of one’s random
sample increases, since larger samples are more likely than smaller ones to
capture a representative portion of the parent population. In fact, for small
populations (fewer than fifty members), it is often best to collect data on the
entire population rather than use a sample because this often improves the
reliability and credibility of the data.
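These two influences on sampling error are commonly formalized as the standard error of a sample proportion. The short sketch below is illustrative rather than tied to any particular study: the numerator p(1 - p) captures homogeneity, and the denominator captures sample size.

```python
import math

# Standard error of a sample proportion: SE = sqrt(p * (1 - p) / n).
# The numerator p * (1 - p) reflects homogeneity (it shrinks as the population
# becomes more uniform on the trait), and SE falls as the sample size n grows.
def standard_error(p: float, n: int) -> float:
    return math.sqrt(p * (1 - p) / n)

print(f"{standard_error(0.5, 100):.4f}")   # heterogeneous, small n  -> 0.0500
print(f"{standard_error(0.5, 1600):.4f}")  # heterogeneous, larger n -> 0.0125
print(f"{standard_error(0.9, 100):.4f}")   # more homogeneous split  -> 0.0300
```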
Formulating the Sample
When sampling is necessary, it is essential that the researcher first consider the quality of the sampling frame. A sampling frame is the list or quasi list of elements from which a probability sample is selected. Often, sampling frames do not truly include all of the elements that their names might imply. For example, telephone directories are often taken to be a listing of a city’s population. There are several defects in this reasoning, but the major one involves a social-class bias. Poor people are less likely to have telephones; therefore, a telephone directory sample is likely to have a middle- and upper-class bias. To generalize to the population composing the sampling frame, it is necessary for all of the elements to have equal representation in the frame. Elements that occur more than once will have a greater probability of selection, and the overall sample will overrepresent those elements.
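One routine safeguard against the duplicate-element problem, sketched below with hypothetical names, is to deduplicate the frame so that every element appears exactly once and therefore has the same probability of selection.

```python
# A minimal sketch: deduplicating a sampling frame so every element appears
# exactly once. The names in the raw frame are hypothetical.
raw_frame = ["Adams", "Baker", "Baker", "Chen", "Diaz", "Chen"]

seen = set()
frame = []
for element in raw_frame:
    if element not in seen:
        seen.add(element)
        frame.append(element)

print(frame)  # ['Adams', 'Baker', 'Chen', 'Diaz']
```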
Regardless of how carefully the researcher chooses a sampling frame and a
representative sample from it, sample values are only approximations of population
parameters. Probability theory enables the researcher to estimate how far the
sample statistic is likely to diverge from population values, using two key
indices called confidence levels and confidence intervals. Both of these are
calculated by mathematical procedures that can be found in any basic statistics
book.
A confidence level specifies how confident the researcher can be that the statistics are reliable estimates of population parameters, and a confidence interval stipulates how far the population parameters might be expected to deviate from sample values. For example, in the 1984 presidential election, The Washington Post polled a sample of 8,969 registered voters; based on their responses, the newspaper reported that 57 percent of the vote would go to Ronald Reagan and 39 percent would go to Walter Mondale. The poll in The Washington Post had a confidence level of 95 percent, and its confidence interval was plus or minus three percentage points. This means that pollsters could be 95 percent confident that Reagan’s share of the 92.5 million popular votes would range between 54 percent and 60 percent, while Mondale’s vote would vary between 36 percent and 42 percent. When reporting predictions based on probability sampling, the researcher should always report the confidence level and confidence interval associated with the sample.
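Under the simplifying assumption of simple random sampling, the interval can be computed directly from the sample proportion and the sample size. The sketch below uses the Post's published figures; it yields a tighter margin (about one point) than the reported plus or minus three points, since published margins typically allow for design effects beyond simple random sampling.

```python
import math

# 95% confidence interval for a proportion under simple random sampling:
# p +/- 1.96 * sqrt(p * (1 - p) / n), using the Washington Post figures
# (p = 0.57, n = 8969). The reported +/- 3-point margin is wider because
# published margins usually account for design effects.
p, n = 0.57, 8969
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% CI: {p - margin:.3f} to {p + margin:.3f}")  # about 0.560 to 0.580
```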
Sampling Techniques
A basic principle of probability sampling is that a sample will be representative of the population from which it is selected if all members of the population have an equal chance of being selected in the sample. Flipping a coin is the most frequently cited example: The “selection” of a head or a tail is independent of previous selections of heads or tails. Instead of flipping a coin, however, researchers usually use a table of random numbers.
A simple random sample may be generated by assigning consecutive numbers to the
elements in a sampling frame, generating a list of random numbers equal to one’s
desired sample size, and selecting from the sampling frame all elements having
assigned numbers that correspond to one’s list of random numbers. This is the
basic sampling method assumed in survey statistical computations, but it is seldom
used in practice because it is often cumbersome and inefficient. For that reason,
researchers usually prefer systematic sampling with a random start. This approach,
under appropriate circumstances, can generate equally representative samples with
relative ease.
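In code, the simple random procedure just described might look like the following minimal sketch, where random.sample stands in for a table of random numbers and the frame's contents are hypothetical.

```python
import random

# Simple random sampling: number the frame, then draw the desired number of
# distinct random positions. random.sample plays the role of a table of
# random numbers, guaranteeing selection without replacement.
sampling_frame = [f"element_{i}" for i in range(1, 1001)]  # 1,000 elements
simple_random_sample = random.sample(sampling_frame, k=100)
```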
A systematic sample with a random start is generated by selecting every kth
element (for example, every fifth element) listed in a sampling frame.
Thus, a systematic sample of one hundred can be derived from a sampling frame
containing one thousand elements by selecting every tenth element in the frame. To
guard against any possible human bias, the first element should be
chosen at random. Although systematic sampling is relatively uncomplicated, it
yields samples that are highly representative of the populations from which they
are drawn. The researcher should be alert, however, to the potential systematic
sampling problem called sampling frame periodicity, which does not affect simple
random methods. If the sampling frame is arranged in a cyclical pattern that
coincides with the sampling interval, a grossly biased sample may be drawn.
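A minimal sketch of the procedure, continuing the hypothetical one-thousand-element frame from above:

```python
import random

# Systematic sampling with a random start: compute the sampling interval
# k = N / n, pick a random start within the first interval, then take every
# k-th element thereafter.
frame = [f"element_{i}" for i in range(1, 1001)]  # N = 1,000
n = 100
k = len(frame) // n                                # interval = 10
start = random.randrange(k)                        # random start in [0, k)
systematic_sample = frame[start::k]                # yields 100 elements
```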
Sampling Frame Periodicity
American sociologist Earl Babbie described a study of soldiers that illustrates
how sampling frame periodicity can produce seriously unrepresentative systematic
samples. He reported that the researchers used unit rosters as sampling frames and
selected every tenth soldier for the study. The rosters, however, were arranged by
squads containing ten members each, and squad members were listed by rank, with
sergeants first, followed by corporals and privates. Because this cyclical
arrangement coincided with the ten-element sampling interval, the resulting sample
contained only sergeants.
Sampling frame periodicity, although a serious threat to sampling
validity, can be avoided if researchers carefully study the
sampling frame for evidence of periodicity. Periodicity can be corrected by
randomizing the entire list before sampling from it or by drawing a simple random
sample from within each cyclical portion of the frame.
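The soldier example and the randomization fix can be reproduced in a few lines; the squad size and rank counts here are assumptions for illustration.

```python
import random

# Illustration of sampling frame periodicity: a roster of 100 squads of ten,
# each listed sergeant first. With interval k = 10, the cyclical arrangement
# coincides with the interval and the sample contains only sergeants.
ranks = ["sergeant"] + ["corporal"] * 2 + ["private"] * 7  # 10 per squad
roster = [(squad, rank) for squad in range(100) for rank in ranks]

biased = roster[0::10]                 # every selected entry is a sergeant
print({rank for _, rank in biased})    # {'sergeant'}

random.shuffle(roster)                 # randomize the list before sampling
fixed = roster[0::10]                  # now a mix of ranks
```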
The third method of probability sampling, stratified sampling, is not an alternative to systematic sampling or simple random sampling; rather, it represents a modified framework within which the two methods are used. Instead of sampling from a total population as simple and systematic methods do, stratified sampling organizes a population into homogeneous subsets and selects elements from each subset, using either systematic or simple random procedures. To generate a stratified sample, the researcher begins by specifying the population subgroups, or stratification variables, that are to be represented in a sample. After stipulating these variables, the researcher divides all sampling frame elements into homogeneous subsets representing a saturated mix of relevant stratification characteristics. Once the population has been stratified, a researcher uses either simple random sampling or systematic sampling with a random start to generate a representative sample from the elements falling within each subgroup. Stratified sampling methods can generate a highly useful sample of any well-defined population and may have a smaller sampling error than any other sampling method.
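A minimal sketch of stratified sampling, assuming a hypothetical frame of students stratified on a single variable (year of study) with proportionate allocation across strata:

```python
import random

# Stratified sampling sketch: divide the frame into homogeneous subsets on a
# stratification variable, then draw a simple random sample within each
# stratum in proportion to its size. The frame and variable are hypothetical.
frame = [{"id": i, "year": random.choice(["first", "second", "third", "fourth"])}
         for i in range(2_000)]

strata = {}
for element in frame:
    strata.setdefault(element["year"], []).append(element)

sample_size = 200
stratified_sample = []
for members in strata.values():
    n_stratum = round(sample_size * len(members) / len(frame))
    stratified_sample.extend(random.sample(members, n_stratum))
```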
Multistage Cluster Sampling
Simple random sampling, systematic sampling, and stratified sampling are reasonably simple procedures for sampling from lists of elements. If one wishes to sample from a very large population, however, such as all university students in the United States, a comprehensive sampling frame may not be available. In this case, a modified sampling method, called multistage cluster sampling, is appropriate. It begins with the systematic or simple random selection of subgroups or clusters within a population, followed by a systematic or simple random selection of elements within each selected cluster. For example, if a researcher were interested in the population of all university students in the United States, it would be possible to create a list of all the universities, then sample them using either stratified or systematic sampling procedures. Next, the researcher could obtain lists of students from each of the sample universities; each of those lists would then be sampled to provide the final list of university students for study.
Multistage cluster sampling is an efficient method of sampling a very large population, but the price of that efficiency is a less accurate sample. Although a simple random sample drawn from a population list is subject to a single sampling error, a two-stage cluster sample is subject to two sampling errors. The best way to avoid this problem is to maximize the number of clusters selected while decreasing the number of elements within each cluster.
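A minimal two-stage sketch along the lines of the university example; the institution count, enrollment sizes, and stage-one and stage-two sample sizes are all hypothetical.

```python
import random

# Two-stage cluster sampling sketch: randomly select clusters (universities),
# then randomly select elements (students) within each selected cluster.
universities = {f"university_{u}": [f"student_{u}_{s}" for s in range(500)]
                for u in range(200)}

# Stage 1: select clusters. Favoring more clusters with fewer elements per
# cluster helps contain the combined two-stage sampling error.
chosen_universities = random.sample(list(universities), k=40)

# Stage 2: select elements within each chosen cluster.
cluster_sample = []
for name in chosen_universities:
    cluster_sample.extend(random.sample(universities[name], k=25))
```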
Statistical Theory
As statistician Raymond Jessen pointed out, the theory of sampling is probably
one of the oldest branches of statistical theory. It has only been since the early twentieth century, however, that
there has been much progress in applying that theory to, and developing a new
theory for, statistical surveys. One of the earliest applications for sampling was
in political polling, perhaps because this area provides researchers with the
opportunity to discover the accuracy of their estimates fairly quickly. This area
has also been useful in detecting errors in sampling methods. For example, in
1936, the Literary Digest, which had been accurate in predicting
the winners of the US presidential elections since 1920, inaccurately predicted
that Republican contender Alfred Landon would win 57 percent of the vote over
incumbent President Franklin D. Roosevelt’s 43 percent. The Literary
Digest’s mistake was an unrepresentative sampling frame consisting of
telephone directories and automobile registration lists. This frame resulted in a
disproportionately wealthy sample, excluding poor people who predominantly favored
Roosevelt’s New Deal recovery programs. This emphasized to researchers that a
representative sampling frame was crucial if the sample were to be valid.
In the 1940s, the US Bureau of the Census developed unequal probability
sampling theory, and area-probability sampling methods became widely used and
sophisticated in both theory and practice. The 1945 census of agriculture in the
United States was collected in part on a sample, and the 1950 census of population
made extensive use of built-in samples to increase its accuracy and reduce
costs.
One of the most important advances for sampling techniques has been increasingly sophisticated computer technology. For example, once the sampling frame is entered into the computer, a simple random sample can be selected automatically. In the future, computer technology, coupled with increasingly efficient and accurate information-gathering technology, will enable researchers to select samples that more accurately represent the population.
Sampling techniques are essential for researchers in psychology. Without sampling as a basis for collecting evaluative data, the risk and cost involved in adopting new methods of treatment would be difficult to justify, evaluating the effectiveness of new programs would be prohibitively expensive, and some populations are so large and dispersed that observing every element would be impossible.
Probability sampling is the most effective method for the selection of study
elements in the field of psychology for two reasons. First, it avoids conscious or
unconscious biases
in element selection on the part of the researcher. If all elements in the
population have an equal chance of selection, there is an excellent chance that a
sample so selected will closely represent the population of all elements. Second,
probability sampling permits estimates of sampling error. Although no probability
sample will be perfectly representative in all respects, controlled selection
methods permit the researcher to estimate the degree of expected error in that
regard.
Bibliography
Babbie, Earl R. The Practice of Social Research. 13th ed. Belmont: Wadsworth, 2012. Print.
Blalock, Hubert M., Jr. Social Statistics. 2nd ed. New York: McGraw-Hill, 1981. Print.
Henry, Gary T. Practical Sampling. Newbury Park: Sage, 1998. Print.
Jessen, Raymond James. Statistical Survey Techniques. New York: Wiley, 1978. Print.
Kish, Leslie. Survey Sampling. 1965. Reprint. New York: Wiley, 1995. Print.
Lohr, Sharon L. Sampling: Design and Analysis. Pacific Grove: Duxbury, 2009. Print.
Panik, Michael J. Statistical Inference: A Short Course. Hoboken: Wiley, 2012. Print.
Thompson, Steven K. Sampling. 3rd ed. Hoboken: Wiley, 2012. Print.
Uprichard, Emma. "Sampling: Bridging Probability and Non-Probability Designs." International Journal of Social Research Methodology 16.1 (2013): 1–11. Print.