I’ll start by saying that I think Amazon Mechanical Turk (MTurk) and similar online labor markets are nothing less than a revolution in experimental psychology. I have already conducted over a hundred experiments on MTurk and have come to consider it one of the most important tools available to me. Together with Qualtrics (see previous posts with tips – 1, 2, 3), MTurk is a very powerful tool for quick and inexpensive data collection. You don’t have to take my word for it; take it from those who have studied the question. High-profile articles keep popping up in journals across domains, and they reach the same conclusion I have – MTurk is an important tool. The following examples were chosen from psychology, management, economics, and even biology:
Social Psychology
Findings indicate that: (a) MTurk participants are slightly more representative of the U.S. population than are standard Internet samples and are significantly more diverse than typical American college samples; (b) participation is affected by compensation rate and task length but participants can still be recruited rapidly and inexpensively; (c) realistic compensation rates do not affect data quality; and (d) the data obtained are at least as reliable as those obtained via traditional methods.
Mechanical Turk (MTurk), an online labor market created by Amazon, has recently become popular among social scientists as a source of survey and experimental data. The workers who populate this market have been assessed on dimensions that are universally relevant to understanding whether, why, and when they should be recruited as research participants. We discuss the characteristics of MTurk as a participant pool for psychology and other social sciences, highlighting the traits of the MTurk samples, why people become MTurk workers and research participants, and how data quality on MTurk compares to that from other pools and depends on controllable and uncontrollable factors.
Clinical Psychology
Although participants with psychiatric symptoms, specific risk factors, or rare demographic characteristics can be difficult to identify and recruit for participation in research, participants with these characteristics are crucial for research in the social, behavioral, and clinical sciences. Online research in general and crowdsourcing software in particular may offer a solution. […] Findings suggest that crowdsourcing software offers several advantages for clinical research while providing insight into potential problems, such as misrepresentation, that researchers should address when collecting data online.
Economics
We argue that online experiments can be just as valid— both internally and externally—as laboratory and field experiments, while requiring far less money and time to design and to conduct. In this paper, we first describe the benefits of conducting experiments in online labor markets; we then use one such market to replicate three classic experiments and confirm their results. We confirm that subjects (1) reverse decisions in response to how a decision-problem is framed, (2) have pro-social preferences (value payoffs to others positively), and (3) respond to priming by altering their choices.
Management/Cognition
Although Mechanical Turk has recently become popular among social scientists as a source of experimental data, doubts may linger about the quality of data provided by subjects recruited from online labor markets. We address these potential concerns by presenting new demographic data about the Mechanical Turk subject population, reviewing the strengths of Mechanical Turk relative to other online and offline methods of recruiting subjects, and comparing the magnitude of effects obtained using Mechanical Turk and traditional subject pools. We further discuss some additional benefits such as the possibility of longitudinal, cross cultural and prescreening designs, and offer some advice on how to best manage a common subject pool.
Biology
I review numerous replication studies indicating that AMT data is reliable. I also present two new experiments on the reliability of self-reported demographics. In the first, I use IP address logging to verify AMT subjects’ self-reported country of residence, and find that 97% of responses are accurate. In the second, I compare the consistency of a range of demographic variables reported by the same subjects across two different studies, and find between 81% and 98% agreement, depending on the variable. Finally, I discuss limitations of AMT and point out potential pitfalls.
[Update March 1st, 2016: The APS Observer has a great summary article on MTurk: Under the Hood of Mechanical Turk]
Watch this great overview lecture video about using Amazon Mechanical Turk for academic research (Gabriele Paolacci: The challenges of crowdsourcing data collection in the social sciences):
Other articles
- Separate but equal? A comparison of participants and data gathered via Amazon’s MTurk, social media, and face-to-face behavioral testing (Computers in Human Behavior, Nov 2013).
- The relationship between motivation, monetary compensation, and data quality among US- and India-based workers on Mechanical Turk (Litman, Robinson, & Rosenzweig, 2014, BRM)
- Attentive Turkers: MTurk participants perform better on online attention checks than subject pool participants (Hauser & Schwarz, 2015, BRM) | Summary
- Comparing the Similarity of Responses Received from Studies in Amazon’s Mechanical Turk to Studies Conducted Online and with Direct Recruitment (Bartneck, Duenser, Moltchanova, & Zawieska, 2015, PLOSOne)
- Notes from a Day on the Forums: Recommendations for Maintaining a Good Reputation as an Amazon Mechanical Turk Requester (David Rand’s lab at Yale, draft recommendations)
- Graduating from Undergrads: Are Mechanical Turk Workers More Attentive than Undergraduate Participants? (OSF)
- The Average Laboratory Samples a Population of 7,300 Amazon Mechanical Turk Workers (JDM, 2015) (Summary post on Experimental Turk)
- MTurk ‘Unscrubbed’: Exploring the Good, the ‘Super’, and the Unreliable on Amazon’s Mechanical Turk
- Are samples drawn from Mechanical Turk valid for research on political ideology? (Research and Politics)
- The Generalizability of Survey Experiments (Journal of Experimental Political Science, 2015)
- Conducting Clinical Research Using Crowdsourced Convenience Samples (Annual Review of Clinical Psychology, 2016)
- Psychological research in the internet age: The quality of web-based data (Computers in Human Behavior, 2016) | reviewed on BPS
- Tosti-Kharas, J., & Conley, C. (2016). Coding Psychological Constructs in Text Using Mechanical Turk: A Reliable, Accurate, and Efficient Alternative. Frontiers in Psychology, 7, 741.
- Fifty Percent of Mechanical Turk Workers Have College Degrees, Study Finds (MotherBoard, 2016)
- Pew Research – Research in the Crowdsourcing Age, a Case Study (July, 2016)
- “Cargo Cult” science in traditional organization and information systems survey research: A case for using nontraditional methods of data collection, including Mechanical Turk and online panels (The Journal of Strategic Information Systems, 2016)
- Turking Overtime: How Participant Characteristics and Behavior Vary Over Time and Day on Amazon Mechanical Turk (Journal of the Economic Science Association, 2017)
- A Glimpse Far into the Future: Understanding Long-term Crowd Worker Accuracy (CSCW 2017)
- Replications with MTurkers who are naïve versus experienced with academic studies (2015) (JESP, 2016)
- Are all “research fields” equal? Rethinking practice for the use of data from crowdsourcing market places (BRM, 2016)
- Beyond the Turk: An Empirical Comparison of Alternative Platforms for Crowdsourcing Online Behavioral Research (preprint, 2016)
- Amazon Mechanical Turk in Organizational Psychology: An Evaluation and Practical Recommendations (JBP, 2016)
- Crowdsourcing Consumer Research (JCR, 2017)
- Lie for a Dime When Most Prescreening Responses Are Honest but Most Study Participants Are Impostors (SPPS, 2017)
- Crowdsourcing Samples in Cognitive Science (Trends in Cognitive Sciences, 2017)
- MTurk Character Misrepresentation: Assessment and Solutions (JCR, 2017)
- Validity and Mechanical Turk: An assessment of exclusion methods and interactive experiments (Computers in Human Behavior, 2017)
- Conducting interactive experiments online (Experimental Economics, 2018)
- Turkers and Canadian students did not differ in ability to label clip art and photographic images (BRM, 2018)
- Common Concerns with MTurk as a Participant Pool: Evidence and Solutions (preprint)
- How to Maintain Data Quality When You Can’t See Your Participants (Observer, 2019)
- Tapped Out or Barely Tapped? Recommendations for How to Harness the Vast and Largely Unused Potential of the Mechanical Turk Participant Pool (preprint)
- An MTurk Crisis? Shifts in Data Quality and the Impact on Study Results (SPPS, 2019)
- Berinsky, A. J., Margolis, M. F., & Sances, M. W. (2014). Separating the shirkers from the workers? Making sure respondents pay attention on self‐administered surveys. American Journal of Political Science, 58(3), 739-753.
- Anson, I. G. (2018). Taking the time? Explaining effortful participation among low-cost online survey participants. Research & Politics, 5(3), 2053168018785483.
- Hauser, D. J., & Schwarz, N. (2016). Attentive Turkers: MTurk participants perform better on online attention checks than do subject pool participants. Behavior research methods, 48(1), 400-407.
- Snowberg, E., & Yariv, L. (2018). Testing the waters: Behavior across participant pools (No. w24781). National Bureau of Economic Research.
- Gupta, N., Rigotti, L., & Wilson, A. (2021). The Experimenters’ Dilemma: Inferential Preferences over Populations. arXiv preprint arXiv:2107.05064.
- Eyal, P., David, R., Andrew, G., Zak, E., & Ekaterina, D. (2021). Data quality of platforms and panels for online behavioral research. Behavior Research Methods, 1-20.
Before we begin, I think this article is a MUST read for anyone thinking of using MTurk for academic research: The Internet’s hidden science factory
From that article, I strongly recommend watching the following video about the life of one MTurker:
Lessons learned (some of these are rather old; I strongly advise revisiting them):
- You need to verify that participants read and understand your survey, and that they don’t click through their answers at random. For that I do the following:
- After each scenario, I run a quiz to test their understanding.
- Obviously, every part includes a check. A manipulation should always be tested, preferably with more than a single manipulation check.
- Add a timer to each page and include a check in your stat syntax to flag anyone who answered too fast (see the sketch right after this list).
- Include a funneling section asking what the survey was about, and set a minimum answer length in characters. Go over the answers to see who is putting in noise. Of course, if you included a manipulation, also probe for suspicion: ask what they thought the purpose was, or whether they can see any connection between the manipulation and your tested DV.
- It goes without saying that you should test your survey before releasing it into the wild. But a very important point is to set email triggers and verify that the answers you get are what they should be. A few times I discovered something wrong within the first ten participants, so I stopped the batch, corrected the mistake, and restarted everything.
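To make the timing, funneling, and duplicate checks concrete, here is a minimal sketch of the kind of screening script I mean – in Python/pandas rather than any particular stat syntax, and with hypothetical column names (WorkerId, page_time_sec, funnel_text) that depend on how your Qualtrics export is set up:

```python
import pandas as pd

# Hypothetical column names - adjust to match your own Qualtrics export.
df = pd.read_csv("survey_export.csv")

too_fast = df["page_time_sec"] < 5                          # threshold depends on the page
short_funnel = df["funnel_text"].fillna("").str.len() < 50  # minimum-characters rule
duplicate_worker = df.duplicated(subset="WorkerId", keep=False)

df["flagged"] = too_fast | short_funnel | duplicate_worker
print(df["flagged"].value_counts())

# Flagged rows go to a separate file for manual review before any exclusions.
df[df["flagged"]].to_csv("flagged_for_review.csv", index=False)
```

The thresholds here are placeholders; the point is only that the flags are scripted, so the same rules apply to every participant.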
[UPDATE 2013/02/05: my answer to a discussion about this]
- The questionnaire should show participants that you’re a serious researcher. Meaning:
- 2 or 3 comprehension quiz questions about the scenarios, which participants have to get right to proceed, to make sure they understood the scenario or what they need to do in the task.
- Decoy questions keyed in opposite directions and randomized into the scales (ones I use often – “the color of the grass is blue”, “in the same week, Tuesday comes after Monday”, “rich people have less money than poor people”, etc.; see the scoring sketch right after this update).
- Randomizing question and choice sequence for each section.
- Adding a funneling section.
- Adding a timer to all questions to check how much time they spent on each page and when they clicked on things.
- Between-subjects manipulations are better than a simple survey, since different participants see different conditions, which reduces the chance of them simply sharing answers.
- There’s no escape from going over the answers in detail: checking response timing, checking for duplicates, and reading the funneling section.
[end of UPDATE]
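The decoy items above are simple enough to score automatically. A minimal sketch, again with hypothetical column names and assuming 1–7 agreement scales; the idea is just that each decoy has an expected pole, and anyone landing on the wrong side accumulates a failure:

```python
import pandas as pd

df = pd.read_csv("survey_export.csv")

# Expected pole for each decoy item (hypothetical column names).
# "low" = should be disagreed with, "high" = should be agreed with.
decoys = {
    "grass_is_blue": "low",
    "tuesday_after_monday": "high",
    "rich_less_money": "low",
}

def decoy_failures(row):
    fails = 0
    for item, pole in decoys.items():
        if pole == "low" and row[item] >= 4:   # agreed with an obviously false item
            fails += 1
        if pole == "high" and row[item] <= 4:  # disagreed with an obviously true item
            fails += 1
    return fails

df["decoy_fails"] = df.apply(decoy_failures, axis=1)
print(df["decoy_fails"].value_counts())
# One reasonable rule: inspect (or exclude) anyone failing more than one decoy.
```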
For problems with running MTurkers, read:
- Let’s keep discussing M Turk sample validity
- What’s a “valid” sample? Problems with Mechanical Turk study samples, part 1
- Fooled twice, shame on who? Problems with Mechanical Turk study samples, part 2
- My Experience as an Amazon Mechanical Turk (MTurk) Worker (Utpal Dholakia)
For the technical details on how to set things up, read the following:
- Experiments using Mechanical Turk. Part 1
- Experiments using Mechanical Turk. Part 2
- THE TECHNICAL DETAILS, TUTORIALS, WALK-THROUGHS
- How to connect Qualtrics and mturk, Part II
- The right way to prevent duplicate workers – How to Block Past Workers from Doing Surveys (one common implementation of the idea is sketched right after this list)
- MTurk + Qualtrics
- Guide to running Mturk experiments [2019]
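On blocking past workers (linked above): the usual approach is a “qualification” that you grant to everyone who already participated, and that new HITs then require workers not to have. A rough sketch using the boto3 MTurk client – this is my illustration of the idea, not the exact recipe from that post, and the worker IDs are placeholders:

```python
import boto3

# Production MTurk lives in us-east-1; use the sandbox endpoint_url while testing.
mturk = boto3.client("mturk", region_name="us-east-1")

# 1. Create a qualification that marks past participants of study X.
qual = mturk.create_qualification_type(
    Name="Took study X",
    Description="Assigned to workers who already completed study X",
    QualificationTypeStatus="Active",
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

# 2. Assign it to everyone who already took the study (placeholder IDs).
for worker_id in ["A1EXAMPLE", "A2EXAMPLE"]:
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId=worker_id,
        IntegerValue=1,
        SendNotification=False,
    )

# 3. When creating the new HIT, pass this as QualificationRequirements so
#    anyone holding the qualification cannot see or accept it.
exclusion = [{
    "QualificationTypeId": qual_id,
    "Comparator": "DoesNotExist",
    "ActionsGuarded": "DiscoverPreviewAndAccept",
}]
```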
There’s also a very helpful blog I strongly recommend you visit – Experimental Turk, which describes itself as “a blog on social science experiments on Amazon Mechanical Turk”. It hasn’t been updated for a while, but there’s still some valuable info in there.
Tools:
- If you’re using MTurk for academic data collection, you absolutely must use Turkprime (read my review)
- Preventing MTurkers who participated in one study from participating in certain other studies – Turk Check.
- Various tools; I find the “Show URL after accept” JavaScript trick especially useful.
- psiTurk (see presentation here)
- How to set up notifications for HITs
- TaskMaster: A Tool for Determining When Subjects Are on Task (AMPPS, 2019)
- MTurk Sample Calculator: Sample Calculator
- OpenMTurk: An Open-Source Administration Tool for Designing Robust MTurk Studies (preprint)
Survey collection:
- Qualtrics surveys, of course.
Multiple player games:
- Software Platform for Human Interaction Experiments (SoPHIE) (e.g. gossip games)
- “Breadboard is a software platform for developing and conducting human interaction experiments on networks. It allows researchers to rapidly design experiments using a flexible domain-specific language and provides researchers with immediate access to a diverse pool of online participants.”
- oTree offers integration with Amazon Mechanical Turk
- A great article about this – Conducting interactive experiments online (Experimental Economics, 2018), from the developers of LIONESS: Live Online Experimental Server Software
Further readings:
- Identifying Careless Responses in Survey Data (Meade & Craig, 2012, Psychological Methods) – an excellent article on careless responding in online and student samples. A worthy read. Another article is Detecting and Deterring Insufficient Effort Responding to Surveys (Huang et al., 2012, JBP)
- Deneme – a blog of experiments on Amazon Mechanical Turk (whose author created Iterative Tasks on Mechanical Turk)
- Is Mechanical Turk the future of cognitive science research?
- Looking for Subjects? Amazon’s Mechanical Turk
- The Pros & Cons of Amazon Mechanical Turk for Scientific Surveys
- Experimenting on Mechanical Turk: 5 How Tos
- Slides from ACR 2012 (good tips)
- Evaluating Amazon’s Mechanical Turk as a Tool for Experimental Behavioral Research (published at PLOSone, with a related blog post)
- Mechanical Turk and Experiments in the Social Sciences
- How naïve are MTurk workers? and the follow-up response – mTurk: Method, Not Panacea – and the follow-up post – Consequences of Worker Nonnaïvete: The Cognitive Reflection Test
- High quality MTurk data
- Graduating from undergrads: Are MTurk workers less attentive than undergraduate students? (Poster from Manylabs)
- Recent studies on MTurk validity (Mturk for academics, 2016)
- What’s a fair payment on #MTurk?
Alternatives to MTurk:
- StudyResponse
- For Australia – Microworkers (explained in this article – Crowdsourcing participants for psychological research in Australia: A test of Microworkers)
- Prolific Academic (& Crowdflower, see Beyond the Turk: An Empirical Comparison of Alternative Platforms for Crowdsourcing Online Behavioral Research)
- Call for participants
- Find participants
- Reddit (see academic paper about this option)
- Findingfive
- Cosmos – a community science project
Got any other MTurk tips? Have you had any experience running experiments on MTurk? Do share.
To your comment above regarding pay, where you said: “One should be careful with money as an incentive for answering questionnaires on MTurk. I’ve actually found that 5 cents a questionnaire may at times yield higher quality results than a 2 dollar reward since it reduces the chance that people merely participate for the money. People still participate for 2-5 cents, and that couldn’t be just for the money in it.” What is your measure for “higher quality data”? Are you speaking in terms of statistically significant differences, or just trends in the data, or something else? I ask this… […]
Thanks for the comments. These are important questions and I understand your concerns, but you’re raising a few very different issues. As for “high quality data”: as I point out above, I include attention checks and decoy questions throughout my studies, as well as quizzes to make sure participants understood the scenario/task at hand. Higher quality data means fewer errors, fewer failed attention checks, and overall better responses to open-ended questions and tasks. You’ll notice another post in this blog about honesty, and that’s another factor that often comes into play. In this post I report the bottom line. I… […]
I’m looking for help connecting my WordPress website – a brain research study – to Amazon Mechanical Turk. Can you make any recommendations?
Hi Orly, I’m not exactly sure what you mean by connecting a WordPress website to MTurk. In what way? What are you trying to do?
Hi Gilad, my research study is long, for 3 different groups. I built a WordPress website. I would like to link the website to MTurk.