Search Results

Showing 1 - 3 of 3
  • Publication
    Left/right and front/back in sign, speech, and co-speech gestures: what do data from Turkish Sign Language, Croatian Sign Language, American Sign Language, Turkish, Croatian, and English reveal?
    (Versita, 2011-09) Arık, Engin
    Research has shown that spoken languages differ from each other in their representation of space. Using hands, body, and the physical space in front of signers to represent space, do sign languages differ from each other? To what extent are they similar to spoken languages in their expressions of spatial relations? The present study targeted these questions by exploring descriptions of static situations in sign languages (Turkish Sign Language, Croatian Sign Language, American Sign Language) and spoken languages, including co-speech gestures (Turkish, Croatian, and English). It is found that signed and spoken languages differ from each other in their linguistic constructions for left/right and front/back spatial relations. They also differ from one another in their mapping strategies. Crucially, being a signer does not entail more direct iconic mappings than a speaker would use. It is also found that co-speech gestures can complement spoken language descriptions.
  • Publication
    The expressions of spatial relations during interaction in American Sign Language, Croatian Sign Language, and Turkish Sign Language
    (Versita, 2012-11) Arik, Engin
    Signers use their body and the space in front of them iconically. Does iconicity lead to the same mapping strategies in construing space during interaction across sign languages? The present study addressed this question by conducting an experimental study on basic static and motion event descriptions during interaction (describer input and addressee re-signing/retelling) in American Sign Language, Croatian Sign Language, and Turkish Sign Language. I found that the three sign languages are similar in using classifier predicates of location, orientation, and movement, predominantly employing an egocentric (viewer) perspective but also a non-egocentric perspective, and using similar mapping strategies regardless of interlocutor positions. However, these three sign languages differ from each other in the effects of location and orientation of the objects in pictures and movies, the descriptions of picture (states) vs. movie (motion events), and describer input vs. addressee retellings in their mapping strategies. This study contributes to our knowledge of how the expressions of spatial relations are conveyed in natural human language.
  • Publication
    Sarcasm detection on news headlines using transformers
    (Springer, 2025-09-07) Gümüşçekiçci, Gizem; Dehkharghani, Rahim
    Sarcasm poses a linguistic challenge due to its figurative nature, where the intended meaning contradicts the literal interpretation. Sarcasm is prevalent in human communication, affecting interactions in literature, social media, news, e-commerce, etc. Identifying the true intent behind sarcasm is challenging but essential for applications in sentiment analysis. Detecting sarcasm in written text is a challenging task that has attracted many researchers in recent years. This paper addresses sarcasm detection in news headlines; journalists often favor sarcastic headlines because they seem more engaging to readers. In the proposed methodology, we experimented with Transformers, namely the BERT model, and several machine and deep learning models with different word and sentence embedding methods. The proposed approach inherently requires high-performance resources due to its use of large-scale pre-trained language models such as BERT. We also extended an existing news headlines dataset for sarcasm detection using augmentation techniques and annotated it with hand-crafted features. The proposed methodology outperformed almost all existing sarcasm detection approaches with a 98.86% F1-score when applied to the extended news headlines dataset, which we made publicly available on GitHub.
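    Alongside BERT, the abstract mentions classical machine learning models with word and sentence embeddings. A minimal sketch of one such baseline, assuming a TF-IDF plus logistic regression pipeline in scikit-learn (the toy headlines and labels below are hypothetical; the paper's actual models, features, and dataset are not reproduced here):

    ```python
    # Illustrative baseline for binary sarcasm classification of headlines.
    # This is a sketch, not the paper's method: real experiments would use
    # the extended news headlines dataset and models such as BERT.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled headlines (hypothetical): 1 = sarcastic, 0 = literal
    headlines = [
        "area man passionate defender of what he imagines constitution to be",
        "stock markets close higher after fed announcement",
        "report: nation's dogs still not sure what they did wrong",
        "new study links exercise to improved heart health",
    ]
    labels = [1, 0, 1, 0]

    # Unigram + bigram TF-IDF features feeding a logistic regression classifier
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(headlines, labels)

    pred = clf.predict(["city council votes to approve new budget"])[0]
    ```

    A transformer variant would swap the TF-IDF features for contextual sentence embeddings and fine-tune a pre-trained encoder on the same labels.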