Revision as of 19:40, 22 June 2021
Mixed effects models to assess whether the different variables analyzed (difference of speeds, speed of the player and speed of the opponent) can discriminate among the types of interaction, finding that only the collective variable of the relative speeds can discriminate the two kinds of programmed agents from genuine human interaction (Section 4.3).

3. MATERIALS AND METHODS

3.1. EXPERIMENTAL PROCEDURE

In this experiment, human participants were allocated computers to interact in pairs, in a shared perceptual space, where some opponents were other human participants and some opponents were computerized agents (bots), but participants were unaware of the nature of their opponents. Our intention was not to duplicate Auvray's experiment, where each participant simultaneously encounters a human partner, a mobile agent and a static one. In our case, each participant received only a single stimulus in one of the following scenarios: human vs. human, human vs. "oscillatory agent" and human vs. "shadow agent." The "oscillatory agent" was programmed to deploy a sinusoidal behavior (describing a sinusoidal trajectory of 0.5 Hz and 200 pixels of amplitude), predictable and deterministic. In contrast, the "shadow agent" was able to show an irregular pattern, since it consists of a "shadow image" of the participant (i.e., a bot that generates a movement strictly identical to the participant's trajectory but delayed 400 ms in time and shifted 125 pixels in space). Participants were instructed to try to detect whether their opponent was human or not and were asked to fill in a questionnaire (although the analysis of the participants' responses is beyond the scope of this paper). When participants arrived at the laboratory, they were randomly assigned to a workstation and provided with headphones.
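The two bot behaviors described above can be sketched as follows. The parameters (0.5 Hz and 200 px for the oscillatory agent; 400 ms and 125 px for the shadow agent) come from the text, while the function names, the screen origin and the sampling step are assumptions made for illustration, not part of the original platform.

```python
import math

def oscillatory_agent(t, freq_hz=0.5, amplitude_px=200.0, center_px=0.0):
    """Position of the sinusoidal bot at time t (seconds).

    freq_hz and amplitude_px follow the text (0.5 Hz, 200 px);
    center_px is an assumed origin on the screen axis.
    """
    return center_px + amplitude_px * math.sin(2.0 * math.pi * freq_hz * t)

def shadow_agent(player_trajectory, t, dt, delay_s=0.4, offset_px=125.0):
    """Shadow bot: the player's own trajectory delayed 400 ms and
    shifted 125 px in space, as described in the text.

    player_trajectory: player positions sampled every dt seconds.
    round() avoids floating-point index errors; before enough
    history exists the bot simply holds the earliest sample.
    """
    delayed_index = max(0, round((t - delay_s) / dt))
    delayed_index = min(delayed_index, len(player_trajectory) - 1)
    return player_trajectory[delayed_index] + offset_px
```

Because the shadow agent replays the participant's own movement, its pattern is exactly as irregular as the human input, which is what makes it harder to unmask than the deterministic oscillator.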
They were informed that the study involved two parts, each independent from the other, and that the first one (the training stage) would take approximately 3 min and the second one (the evaluation stage) a further 10 min. To guarantee confidentiality during the study, identification codes/nicknames were chosen by the participants. Throughout the experiment, participants were given verbal instructions regarding the structure of the experiment and its sections. In the training stage, the participants were informed that it was a simple "proof of concept" stage and that the purpose was only to learn how the platform worked. Participants were free to move the mouse as they pleased during three sessions of 1 min each, with a short break between them. They played consecutively against three bots of increasing difficulty of interaction: a static bot, a bot moving at a constant low speed and a bot moving at a constant medium speed. After that, they were informed of the aim and rules of the evaluation part of the experiment. The evaluation consisted of ten sessions of 40 s each. In each session: (i) each participant was randomly assigned an opponent (human-human or human-bot) to explore the virtual space; (ii) participants were asked to move their mice in order to detect the movement of their assigned opponent; (iii) after each session, participants were asked to make a choice between the two options displayed on the screen in order to guess whether their opponent was a human or a bot; and (iv) finally, participants were informed on the screen whether or not they had guessed correctly. After the ten sessions were completed, the experiment was declared finished.
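The evaluation stage's session loop (steps i–iv) can be sketched as below. The session count and opponent types follow the text; the callback for collecting the participant's on-screen choice, the dictionary layout and the seeding are hypothetical stand-ins for the real platform.

```python
import random

def run_evaluation_stage(guess_fn, num_sessions=10,
                         opponents=("human", "oscillatory", "shadow"),
                         seed=None):
    """Sketch of the evaluation stage: ten sessions, each with a
    randomly assigned opponent, a human-or-bot guess collected via
    guess_fn (a stand-in for the on-screen two-option choice), and
    on-screen feedback on whether the guess was correct.
    """
    rng = random.Random(seed)
    results = []
    for session in range(num_sessions):
        opponent = rng.choice(opponents)          # step (i): random opponent
        guess = guess_fn(session)                 # steps (ii)-(iii): "human" or "bot"
        correct = (guess == "human") == (opponent == "human")
        results.append({"opponent": opponent,     # step (iv): feedback record
                        "guess": guess,
                        "correct": correct})
    return results
```

For example, a participant who always answers "human" is scored correct exactly in the human-human sessions, which is the baseline any above-chance detector must beat.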