Recruiting Research Participants Has Become More Expensive

Over the past 50 years, recruiting research participants has become more expensive. Changes in technology have contributed to this problem. According to the Centers for Disease Control and Prevention, in 2022, only 27 percent of adults lived in a household with a landline telephone, compared to more than 90 percent in 2004 (Blumberg and Luke, 2023). However, the commodification of personal data has contributed even more significantly to the problem. Data brokers constantly collect information about individuals to sell to companies. How often have you received a robocall, a spam text or email, or even been stopped on the street and asked to answer a few questions? While some of these inquiries are for legitimate research purposes, others are designed to sell goods and services, and some are outright scams. Over time, people have become harder to reach and more suspicious of invitations to participate in research projects.

One innovative solution to recruiting large and diverse nonprobability samples is Amazon’s Mechanical Turk (MTurk). Launched in 2005, MTurk is a crowdsourcing marketplace where researchers can hire individuals (Turkers) to complete human intelligence tasks (HITs), such as surveys. Using MTurk has become so popular that the Journal of Management commissioned a review of the platform (Aguinis, Villamor, and Ramani, 2021). The main benefits of MTurk are the low cost and ease of obtaining large and diverse samples of participants, as well as the ability to use a variety of research designs, including experimental and longitudinal designs.

The problems with using MTurk are associated with internal and external validity threats. For example, in their study, Herman Aguinis, Isabel Villamor, and Ravi Ramani (2021) identified inattention, high attrition rates, inconsistent English language fluency, and non-naivete (i.e., exposure to the topic more than once) as challenges. These are potential threats to internal validity. Remember from Chapter 4 that internal validity threats challenge the causal statement about the observed covariations between variables. The authors also identified workers misrepresenting their self-reported sociodemographic characteristics and self-selection bias as challenges. These problems are potential threats to external validity. That is, are the cause-and-effect findings of the study generalizable to other groups? Another potential challenge is the use of bots (computer programs that auto-complete HITs) and server farms to bypass location restrictions (Chmielewski and Kucher, 2020).

MTurk has great potential but also some challenges. The platform allows researchers to reach individuals with specific characteristics, such as age, race/ethnicity, and educational attainment. Moreover, it enables researchers to recruit participants who have had contact with the criminal justice system, work in specific occupations, and live in particular countries. Research suggests that MTurk samples can be more representative of the general population than samples of college students (Goodman, Cryder, and Cheema, 2013). However, research also indicates that certain population groups are overrepresented, including females, Whites, college-educated individuals, liberals, and young people (Levay et al., 2016). Accordingly, scholars note that certain precautions should be taken when using systems like MTurk. For example, Aguinis and colleagues (2021) recommend that researchers use multiple validity checks, such as CAPTCHAs, honeypots (computer code invisible to people), and attention checks. Furthermore, researchers should cross-check workers’ profiles and monitor their average response times.
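The precautions recommended above (attention checks, honeypots, response-time monitoring) can be illustrated with a short, hypothetical screening routine. This is only a sketch of the general technique, not any actual MTurk API: the field names, the attention-check answer, and the minimum-time threshold are all illustrative assumptions.

```python
# Hypothetical data-quality screen for crowdsourced survey responses.
# Flags the problems discussed above: a filled honeypot field (which human
# respondents never see), a failed attention check, and an implausibly
# fast completion time. All field names and thresholds are assumptions.

MIN_SECONDS = 120        # assumed minimum plausible completion time
ATTENTION_ANSWER = "3"   # e.g., an item instructing "Select '3' here"

def flag_low_quality(response):
    """Return a list of quality flags for one survey response."""
    flags = []
    if response.get("honeypot"):                      # bots often fill hidden fields
        flags.append("honeypot_filled")
    if response.get("attention_item") != ATTENTION_ANSWER:
        flags.append("failed_attention_check")
    if response.get("seconds_elapsed", 0) < MIN_SECONDS:
        flags.append("too_fast")
    return flags

# Two illustrative responses: one clean, one suspicious.
responses = [
    {"id": "w1", "honeypot": "", "attention_item": "3", "seconds_elapsed": 340},
    {"id": "w2", "honeypot": "bot@spam", "attention_item": "1", "seconds_elapsed": 45},
]

clean = [r for r in responses if not flag_low_quality(r)]
```

In practice a researcher would apply such a screen before analysis and report how many responses were excluded and why, since exclusion criteria themselves can introduce bias.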

Consider how the spread of Internet access has reduced potential problems with biases associated with online samples. Also, think about how such tools make it possible to produce large samples cheaply. Criminal justice researchers are only beginning to study how these crowdsourced opt-in samples should be used. You can review research by Thompson and Pickett (2020) for more information. We return to MTurk in Chapter 9, showing how MTurk samples can be coupled with online survey platforms. In the meantime, you can read more about MTurk through the link below.

Critical Thinking

What justice-focused topics can be studied using crowdsourced opt-in platforms like Amazon’s MTurk? What topics might not be appropriate to explore using these platforms for recruiting participants?
Based on the challenges described above, can you provide an example of how these challenges might affect your ability to produce a sample representative of the population if you wanted to study topics such as environmental justice, immigration, or support for police reform?


Sample Answer

Critical Thinking: Justice-Focused Topics and MTurk

The following discusses which justice-focused topics can be studied using crowdsourced opt-in platforms like Amazon’s MTurk, which topics might not be appropriate, and how the challenges described above could affect representative sampling for specific justice topics.

What justice-focused topics can be studied using crowdsourced opt-in platforms like Amazon’s MTurk?

MTurk’s ability to reach large and diverse (though not perfectly representative) samples at a relatively low cost makes it suitable for studying a variety of justice-focused topics, particularly those that involve attitudes, perceptions, and hypothetical scenarios. Examples include:

  • Public Perceptions of Crime and Justice: Researchers could survey participants on their views regarding the severity of different crimes, their support for various sentencing options, or their perceptions of the fairness of the criminal justice system.
  • Attitudes Towards Law Enforcement: Studies could explore public opinions on police legitimacy, trust in police, support for different policing strategies (e.g., community policing, stop-and-frisk), and perceptions of police bias.

  • Support for Criminal Justice Reform: Researchers could gauge public support for specific reform proposals, such as changes to sentencing laws, diversion programs, or investments in rehabilitation.
  • Victimization Experiences (Non-Sensitive): General questions about experiences with minor forms of victimization or perceptions of safety in their communities could be explored. However, extreme caution and ethical review are necessary for more sensitive victimization topics.
  • Moral Judgments and Ethical Dilemmas in Justice Contexts: Hypothetical scenarios involving legal or ethical dilemmas within the justice system (e.g., jury decision-making, prosecutorial discretion) could be presented to participants to understand their moral reasoning.
  • Understanding of Legal Concepts: Researchers could assess public understanding of legal terms, rights, and procedures.
  • Impact of Media on Justice Perceptions: Studies could examine how media portrayals of crime and the justice system influence public attitudes.
  • Support for Rehabilitation and Reentry Programs: Researchers could gauge public willingness to support programs aimed at helping formerly incarcerated individuals reintegrate into society.
  • Perceptions of Procedural Justice: Studies could explore the importance individuals place on fair processes in legal settings, even when outcomes are unfavorable.

What topics might not be appropriate to explore using these platforms for recruiting participants?

Due to the potential for lack of representativeness, inattention, and misrepresentation, certain justice-focused topics might not be appropriate or require extreme caution when using MTurk:

  • Sensitive Personal Experiences with the Criminal Justice System: Individuals who have been directly involved in the criminal justice system (e.g., incarcerated individuals, victims of serious crime) might be underrepresented or unwilling to disclose such experiences on an online platform with potential privacy concerns.
  • In-depth Exploration of Traumatic Experiences: Detailed accounts of victimization or trauma require sensitive and ethical research protocols that may be difficult to ensure in a crowdsourced online environment.
  • Studies Requiring High Levels of Naivete: If the research topic is likely to have been encountered or discussed extensively online, the issue of non-naivete could severely compromise the results.
  • Topics Requiring Specific, Hard-to-Verify Demographics: While MTurk allows for demographic targeting, the self-reported nature and potential for misrepresentation can be problematic for studies requiring very precise and verifiable demographic characteristics (e.g., specific immigration statuses that individuals might be hesitant to disclose).
  • Research Involving Highly Stigmatized Behaviors or Beliefs: Participants might be less likely to provide honest responses about socially undesirable or stigmatized topics in an online setting, even with assurances of anonymity.

Based on the challenges described, can you provide an example of how these challenges might affect your ability to produce a sample representative of the population if you wanted to study topics such as environmental justice, or immigration, or support for police reform?

Here are examples of how the challenges of MTurk could affect representative sampling for the specified topics:

  • Environmental Justice:

    • Representativeness: Research indicates an overrepresentation of Whites and college-educated individuals on MTurk. This could lead to an undersampling of marginalized communities and people of color, who are often disproportionately affected by environmental injustices. The perspectives and experiences of these underrepresented groups might be crucial for understanding the full scope of the issue and public support for relevant policies.
    • Self-Selection Bias: Individuals who are already aware of and concerned about environmental justice issues might be more likely to participate in a study on the topic, leading to an overestimation of public support or awareness compared to the general population.
    • Inattention: Participants who are not genuinely engaged with the topic might provide superficial or biased responses, affecting the validity of the findings regarding the nuances of environmental justice concerns across different communities.
  • Immigration:

    • Representativeness: The overrepresentation of liberals on MTurk could skew the results on support for immigration policies, potentially overestimating support compared to the broader population. Furthermore, individuals with direct experience as immigrants or those from communities with high immigrant populations might be underrepresented due to demographic biases on the platform and potential language barriers (despite attempts to filter by English fluency, comprehension can vary).
    • Misrepresentation of Sociodemographic Characteristics: Individuals might misrepresent their ethnicity or immigration status due to privacy concerns or a lack of trust in the platform, making it difficult to obtain an accurate representation of different immigrant communities’ views or the views of those living in areas with significant immigration.
    • Non-Naivete: The topic of immigration is frequently discussed in online forums and media. Participants might have been exposed to similar surveys or discussions, influencing their responses and potentially reducing the spontaneity and genuineness of their opinions.
  • Support for Police Reform:

    • Representativeness: The demographic biases on MTurk (e.g., overrepresentation of Whites and liberals, potential underrepresentation of certain racial/ethnic minority groups who may have different experiences with policing) could significantly skew the results on support for police reform. The lived experiences and perspectives of those most impacted by police practices might not be adequately captured.
    • Self-Selection Bias: Individuals with strong pre-existing opinions on police reform (either strongly for or against) might be more motivated to participate in such a study, leading to an overpolarization of the sample and not reflecting the views of the more moderate or less engaged segments of the population.
    • Inconsistent English Language Fluency: Nuanced questions about specific police reforms might be misinterpreted by individuals with lower English language fluency, leading to inaccurate responses.

In conclusion, while MTurk offers advantages in terms of cost and sample size, researchers studying justice-focused topics like environmental justice, immigration, or support for police reform must be acutely aware of its inherent limitations regarding representativeness, potential biases, and data quality. Employing the recommended validity checks and carefully considering the specific research question and target population are crucial steps to mitigate these challenges and interpret findings with appropriate caution regarding generalizability. Combining MTurk with other sampling methods might be necessary to achieve a more representative understanding of these complex social justice issues.
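One common way to mitigate the representativeness problems described above is post-stratification weighting: respondents from overrepresented groups (e.g., the college-educated) are weighted down so that weighted group shares match known population benchmarks. The sketch below is illustrative only; the population shares are hypothetical, and the technique is a general survey-adjustment method rather than anything specific to MTurk.

```python
# Illustrative post-stratification weighting sketch. Corrects a sample
# that overrepresents college-educated respondents (a bias noted above)
# by weighting each group so its weighted share matches an assumed
# population benchmark. The 38/62 split is a hypothetical example.

from collections import Counter

population_share = {"college": 0.38, "no_college": 0.62}  # assumed benchmarks

# A sample of 100 respondents that overrepresents "college" (70% vs. 38%).
sample = ["college"] * 70 + ["no_college"] * 30

counts = Counter(sample)
n = len(sample)

# Weight for group g = (population share) / (sample share).
weights = {g: population_share[g] / (counts[g] / n) for g in counts}

# Each respondent in group g receives weight weights[g]; the weighted
# group shares then equal the population benchmarks by construction.
```

Weighting can only adjust for characteristics the researcher measures and has benchmarks for; it cannot repair self-selection on unmeasured traits such as political engagement, which is why combining MTurk with other sampling methods remains advisable.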
