2 results
Search Results
Showing 1 - 2 of 2
Publication: The function of regressions in reading: Backward eye movements allow rereading (Springer, 2013-01) Booth, Robert William; Weger, Ulrich W.

Standard text reading involves frequent eye movements that go against normal reading order. The function of these "regressions" is still largely unknown. The most obvious explanation is that regressions allow the rereading of previously fixated words. Alternatively, physically returning the eyes to a word's location could cue the reader's memory for that word, effectively aiding comprehension via location priming (the "deictic pointer hypothesis"). In Experiment 1, regression frequency was reduced when readers knew that information was no longer available for rereading. In Experiment 2, readers listened to auditorily presented text while moving their eyes across visual placeholders on the screen; here, rereading was impossible but deictic pointers remained available, yet readers did not make targeted regressions. In Experiment 3, target words in normal sentences were changed after reading. When the eyes later regressed to these words, participants generally remained unaware of the change, and their answers to comprehension questions indicated that the new meaning of the changed word determined their sentence representations. These results suggest that readers use regressions to reread words, not to cue their memory for previously read words.

Publication: Left/right and front/back in sign, speech, and co-speech gestures: What do data from Turkish Sign Language, Croatian Sign Language, American Sign Language, Turkish, Croatian, and English reveal? (Versita, 2011-09) Arık, Engin

Research has shown that spoken languages differ from each other in their representation of space. Given that sign languages use the hands, the body, and the physical space in front of the signer to represent space, do sign languages differ from each other as well? To what extent are they similar to spoken languages in their expressions of spatial relations? The present study addressed these questions by exploring descriptions of static situations in three sign languages (Turkish Sign Language, Croatian Sign Language, and American Sign Language) and three spoken languages, including their co-speech gestures (Turkish, Croatian, and English). Signed and spoken languages were found to differ from each other in their linguistic constructions for left/right and front/back spatial relations, and they also differed from one another in their mapping strategies. Crucially, being a signer does not entail more direct iconic mappings than a speaker would use. Co-speech gestures were also found to complement spoken language descriptions.