Many of us are inundated daily with survey research for one reason or another. Whether it be a simple social media poll, an online banking experience survey, or a government approval rating, these surveys blend into our daily lives without our really noticing. To understand how or why survey research is relevant, we need a working definition that captures the goals and uses of surveys. In its earliest forms, the survey took the form of a census: data were gathered on individuals primarily for purposes of taxation and military service (Andres, 2012).
For this critical inquiry into survey research we have chosen a transparent definition that encompasses the goals of survey research: “The purpose of the survey is to produce statistics – that is, quantitative or numerical descriptions of some aspects of the population” (Fowler, 2009). This does not mean that survey research is strictly quantitative; it encompasses qualitative, quantitative, and mixed methods approaches.
Mapping out and conceptualizing your study
Survey research is multifaceted and requires the researcher to possess theoretical expertise, a grounded understanding of sampling theory, attention to ethical considerations, the ability to write compelling and clear questions, and a flair for graphic design (Andres, 2012).
To create an effective survey, the researcher must locate themselves in relation to the survey. As the literature tells us, the ‘objective researcher’ is simply not possible. Power dynamics are among the considerations the researcher must address when creating a survey.
It helps to create a mind map, or conceptual framework, of how the research is going to be carried out. How is the data going to be analyzed (SPSS, NVivo)? Is it a qualitative, quantitative, or mixed methods approach?
An important question to consider when looking at the survey as a methodology is: “What differentiates the survey from other kinds of research?” The main difference lies in that surveys “ask the questions which the researcher wants answered, and often they dictate the range of answers that may be given” (Sapsford, 1999, p. 4).
Andres (2012) and Sapsford (1999) echo similar considerations in that research questions should be narrowed down. The following aspects must be thought through as research questions are being formed:
- What’s the problem?
- What kind of answer am I looking for?
- What kind of an argument might lead from the question to the answer?
- What kind of evidence might lead from the question to the answer?
- What kind of evidence will I need to sustain this kind of argument?
- How is this kind of evidence to be collected, and from/about whom or what?
- How shall I demonstrate to the reader that the evidence is valid?
O’Leary, Andres, and Sapsford all agree that one of the best ways of ensuring that one’s data is reliable and standardized is to pilot the instrument. Pilot work is useful for making sure that the “measures produce consistent results” (Sapsford, 1999, p. 62). This is where validity comes in: when piloting, special attention should be given to the way questions are framed. One technique during piloting is to ask the same question in various ways and “talk to respondents afterwards about what they thought about when they were answering, to try to obtain the most useful answers” (Sapsford, 1999, p. 108). This allows the researcher to assess the research questions and decide, based on the piloting results, whether questions should remain the same, be changed, or be omitted entirely.
As Lesley Andres (2012) points out, identifying a sample is key to making the survey relevant and accurate. First, one must ask: who can best answer the research questions posed in the study? All studies, regardless of their size, must address the following in identifying their sample:
- Target Population
- Sampling Frame
- Survey Sample
A sample is a small group drawn from a larger group; in technical terms, this larger group is called a population (Punch, 2003). According to Fowler (2009), the sample frame, the sample size, and the design used to select individuals determine “how well a sample represents a population.”
According to Fowler (2009), researchers should evaluate a sample frame on criteria that include:
- The probability of selection is known
- Cost effectiveness
According to Statistics Canada (2013), after determining the sample frame, researchers use probability or non-probability sampling techniques to draw samples. Probability sampling techniques include Simple Random, Systematic, Stratified, and Cluster Sampling.
Simple Random Sampling:
This basic sampling method gives every individual in a population an equal chance of being selected. For example, suppose the goal is to choose 25 students from a group of 250 students in an undergraduate psychology class. The 25 students could simply be drawn at random out of a hat.
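The draw-from-a-hat idea can be sketched in a few lines of Python (an illustration, not from the source; the roster of 250 numbered students is hypothetical):

```python
import random

# Hypothetical roster: 250 students identified by number.
students = list(range(1, 251))

# Simple random sampling: every student has an equal chance of selection,
# and no student can be drawn twice.
sample = random.sample(students, k=25)
```

`random.sample` draws without replacement, which matches pulling distinct names out of a hat.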
Systematic Sampling can best be described using the same example. If the goal is to choose 25 students from a group of 250, the researcher picks the first subject at random and then selects every tenth individual from the list of students. This sampling interval is derived by dividing the total number of students by the number of students to be selected: 250/25 = 10. Therefore, if the researcher selects the 8th student on the list as the first participant, every 10th student after that (the 18th, 28th, 38th, etc.) will be chosen until all 25 participants have been selected. Researchers may also choose a modified systematic sampling method in which the sampling fraction, as discussed above, is applied consistently to select participants.
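The interval arithmetic above (250/25 = 10, then every 10th student from a random start) can be sketched as follows; the student list is again hypothetical:

```python
import random

students = list(range(1, 251))  # hypothetical roster of 250 students
sample_size = 25

# Sampling interval: total students divided by students to select (250/25 = 10).
interval = len(students) // sample_size

# Pick the first participant at random within the first interval,
# then take every 10th student after that (e.g. 8th, 18th, 28th, ...).
start = random.randrange(interval)
sample = students[start::interval]
```

Because the start falls inside the first interval, the slice always yields exactly 25 evenly spaced participants.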
Stratified sampling is another type of probability sampling in which the researcher divides the population into smaller groups, called strata. This method is beneficial when the researcher has a large population. For example, a researcher planning to conduct a study of graduate students at UBC would divide the population of students into smaller groups, perhaps by area of study, year of study, or any other criteria decided upon.
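One common way to draw from strata is proportionate allocation, where each stratum contributes in proportion to its size. A minimal sketch, assuming made-up areas of study and counts:

```python
import random

# Hypothetical population: 250 graduate students grouped by area of study.
population = (
    [("education", i) for i in range(120)]
    + [("engineering", i) for i in range(80)]
    + [("arts", i) for i in range(50)]
)

# Divide the population into strata by area of study.
strata = {}
for area, student in population:
    strata.setdefault(area, []).append((area, student))

# Proportionate allocation: sample each stratum in proportion to its size.
total = len(population)
sample_size = 25
sample = []
for area, members in strata.items():
    n = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, n))
```

With these counts the allocation works out exactly: 12 from education, 8 from engineering, and 5 from arts.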
Cluster Sampling is similar to stratified sampling in that the researcher divides the population into groups, or clusters. Unlike stratified sampling, however, the researcher does not select participants from all the clusters. This technique is used to reduce costs by studying selected clusters only.
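A one-stage version of this can be sketched as follows (the clusters and their sizes are hypothetical): whole clusters are picked at random, and everyone within them is surveyed.

```python
import random

# Hypothetical population organized into 10 clusters (e.g. class sections),
# each containing 25 students.
clusters = {f"section_{c}": [f"student_{c}_{i}" for i in range(25)]
            for c in range(10)}

# One-stage cluster sampling: randomly pick a few whole clusters and
# survey every member within them, reducing cost versus covering all clusters.
chosen = random.sample(list(clusters), k=3)
sample = [member for name in chosen for member in clusters[name]]
```

The cost saving comes from visiting only 3 of the 10 sections rather than drawing individuals from every one.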
According to Statistics Canada (2013), researchers or statisticians also use non-probability sampling techniques. Some of the common ones are listed below:
- Convenience Sampling
- Judgment Sampling
- Quota Sampling
- Volunteer Sampling
Once the participants are selected, researchers can decide how they would like to gather information on their research questions. Some of the most common instruments include interviews and questionnaires (O’Leary, 2014), as well as focus groups and panels (Statistics Canada, 2013).
How to administer
Once the researcher has prepared their questions and selected their survey instrument, the survey can be administered in a few ways (O’Leary, 2014; Fowler, 2009; Punch, 2003):
- Self-administered via mail
- Online or by email
- Personally administered at home or in a public place
- Administered in a group setting
Some things to keep in mind when conducting a survey (especially if using a sample):
The time and date matter greatly in surveys because they can change the kind of data we get. For example, if our research question relates to drinking and our sampling takes place at a time when the pubs are closed, we are unlikely to get accurate data. Similarly, as mentioned earlier, standardization is key in survey research; as Sapsford (1999) states, “if measures are not standardized they are of no value in survey” (p. 107).
Sapsford (1999) identifies some of the areas that need to be examined to determine whether the data reported in a survey are valid. These include:
- Validity of measurement – the extent to which the data constitute accurate measurements of what is supposed to be being measured;
- Population validity – the extent to which the sample gives an accurate representation of the population which it is supposed to represent;
- Validity of design – the extent to which the comparisons being made are appropriate to establish the argument which rests on them.
When collecting and analyzing data in large or small quantities, Lesley Andres (2012) suggests two specific software packages. SPSS Statistics is a package used for quantitative data analysis, for which a code book can be made. Andres suggests that students should be familiar with this package so they can accurately analyze quantitative data.
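The code book idea, recording what each stored numeric code means for a survey item, can be sketched in plain Python (an illustration only; SPSS handles this through value labels, and the item and codes below are invented):

```python
# A code book maps numeric survey codes to response labels, so that
# analysis and reporting stay consistent. The item and labels are invented.
codebook = {
    "satisfaction": {
        1: "very dissatisfied",
        2: "dissatisfied",
        3: "neutral",
        4: "satisfied",
        5: "very satisfied",
    },
}

# Hypothetical coded responses from five participants.
responses = [4, 5, 3, 4, 2]

# Decode the stored codes back into readable labels via the code book.
labels = [codebook["satisfaction"][r] for r in responses]
```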
NVivo is the qualitative data analysis package that Andres suggests would be best for analyzing data; it can be particularly helpful when summarizing interviews or recorded telephone conversations. Andres (2012) points out that students should be familiar with both packages, as surveys are often both qualitative and quantitative in nature, and this familiarity gives the researcher more tools in their toolkit.
Surveys are multifaceted and take many forms. They can be an effective tool, offering many different types to meet researchers’ needs. However, when choosing the survey as a methodology, one must be precise about what exactly one wishes to find out. Meaningful results can influence policy and best practices, with the goal of improving the lives of individuals. The purpose of survey research is to ask specific questions of the right group of people to elicit meaningful responses that add to the body of research on a specific topic area (Andres, 2012). Although it may surprise many, surveys have also been used in the social justice arena to raise important questions and concerns relating to issues of discrimination, diversity, and human rights.
Andres, L. (2012). Designing & doing survey research. Thousand Oaks, CA: SAGE.
Fowler, F. J. (2009). Survey research methods (4th ed.). Los Angeles: Sage.
O’Leary, Z. (2014). The essential guide to doing your research project (2nd ed.). London: SAGE.
Punch, K. (2003). Survey research: The basics. Thousand Oaks, CA: Sage Publications.
Sapsford, R. (1999). Survey research. London: Sage.
Statistics Canada. (2013). Statistics: Power from data! Retrieved from http://www.statcan.gc.ca/edu/power-pouvoir/ch13/nonprob/5214898-eng.htm