Tuesday, September 4, 2012

Paper Reading #3: Protecting artificial team-mates: more seems like less

Introduction
Title: Protecting artificial team-mates: more seems like less
Author Bios: Tim Merritt, National University of Singapore, Singapore
Kevin McGee, National University of Singapore, Singapore

Summary
In this study, participants played a video game with an AI teammate and then with a presumed human (PH), which was actually the same AI behaving exactly the same way. The purpose was to see how players behaved when they believed their teammate was an AI versus when they believed it was another human. The goal of the game was for both players to touch the gunner in the middle; they didn't have to touch it at the same time, but both had to touch it to advance to the next level. The gunner scanned in a circle until it detected a player and then fired. A player could distract the gunner by pressing the 'W' key or by running into its field of vision. The study measured how often a player pressed the 'W' key to protect the AI versus the PH teammate.

Related work
  • A Failure of Imagination: How and Why People Respond Differently to Human and Computer Team-Mates.
  • Proactive information exchanges based on the awareness of teammates' information needs
  • Human-centered design in synthetic teammates for aviation: The challenge for artificial intelligence
  • What we have here is a failure of companionship: communication in goal-oriented team-mate games
  • Choosing human team-mates: perceived identity as a moderator of player preference and enjoyment
  • Real-time team-mate AI in games: a definition, survey, & critique
  • Using artificial team members for team training in virtual environments
  • Can computers be teammates?
  • The media equation: how people treat computers, television, and new media like real people and places
  • Are computers scapegoats?: attributions of responsibility in human-computer interaction
This paper is novel, as no other paper addresses this specific topic, and the authors referenced prior work appropriately.

Evaluation
To evaluate the study, the authors used several questions on a Likert-type scale, giving a quantitative but subjective measure of the results. They also included a qualitative question asking which teammate the player protected more and why. They found that even though players actually protected the AI more, they reported in the questionnaire that they had protected the human more. Additional Likert-type questions probed stereotypes and personal pressures. Finally, players watched videos of two pairings (two AIs, and one AI with one PH) and explained the players' behaviors in an open-ended response, so the researchers could compare how participants thought a human acted versus how an AI acted.

Discussion
I thought this contribution was actually pretty interesting. When I started the paper, I assumed players would naturally protect the human teammates more, but the opposite turned out to be true. Even more striking, the players believed they were protecting the humans more even though they in fact protected the AI more. The work is novel since no one has studied this question in this way before.
