NAVIGATING EMOTIONS ACROSS BORDERS: DEEP LEARNING-DRIVEN LOCATION-INFORMED SENTIMENT ANALYSIS OF TWITTER
Emotion assessment, a pivotal domain in natural language processing, is central to understanding the feelings expressed through textual content, and the field has progressed to address the intricate interplay of emotions found in text. This paper introduces an approach to sentiment analysis that combines deep learning techniques with geographic context for Twitter data. Leveraging an expanded sentiment class set that includes positive, negative, neutral, mixed, ambiguous, happy, sad, angry, fearful, and surprised, our framework captures a diverse range of emotional expressions. Incorporating location-based sentiment analysis reveals cross-border sentiment dynamics, enriching our understanding of how emotions resonate across geographical regions.
We present a deep learning model that integrates textual content and location information. Through text vectorization, embedding layers, and advanced classification techniques, our model achieves strong accuracy, F1-score, precision, and recall. Temporal analysis of tweet timestamps uncovers engagement trends over time, while examination of tweet lengths highlights the dynamic range of expression within Twitter's character limit.
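The pipeline described above can be illustrated with a minimal sketch: tweets are vectorized into token ids, mapped through an embedding layer, pooled into a fixed-size vector, concatenated with a location feature, and passed to a classifier over the expanded sentiment set. All names, dimensions, and weights below are illustrative assumptions for exposition, not the paper's actual architecture or trained parameters.

```python
import numpy as np

# The ten sentiment classes named in the abstract.
SENTIMENTS = ["positive", "negative", "neutral", "mixed", "ambiguous",
              "happy", "sad", "angry", "fearful", "surprised"]

def vectorize(text, vocab):
    """Text vectorization: map each whitespace token to an integer id (0 = OOV)."""
    return [vocab.get(tok, 0) for tok in text.lower().split()]

def embed_and_pool(token_ids, embedding):
    """Embedding lookup followed by mean-pooling into a fixed-size vector."""
    if not token_ids:
        return np.zeros(embedding.shape[1])
    return embedding[token_ids].mean(axis=0)

def classify(text, location, vocab, embedding, loc_index, W, b):
    """Concatenate the pooled text vector with a location one-hot,
    then apply a linear classification layer (illustrative stand-in
    for the paper's deep classifier)."""
    text_vec = embed_and_pool(vectorize(text, vocab), embedding)
    loc_vec = np.zeros(len(loc_index))
    if location in loc_index:
        loc_vec[loc_index[location]] = 1.0
    logits = np.concatenate([text_vec, loc_vec]) @ W + b
    return SENTIMENTS[int(np.argmax(logits))]

# Toy usage with random, untrained weights (hypothetical vocabulary and locations):
rng = np.random.default_rng(0)
vocab = {"great": 1, "awful": 2, "day": 3}
embedding = rng.normal(size=(4, 8))            # vocab size 4, embedding dim 8
loc_index = {"US": 0, "UK": 1, "UA": 2}
W = rng.normal(size=(8 + 3, len(SENTIMENTS)))  # text dim + location dim -> classes
b = np.zeros(len(SENTIMENTS))

label = classify("great day", "US", vocab, embedding, loc_index, W, b)
```

In a trained system the random weights would be learned end to end, but the data flow (vectorize, embed, pool, fuse with location, classify) mirrors the components the abstract names.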
Furthermore, our investigation of tweet locations reveals Twitter's global reach, with the United States, United Kingdom, and Ukraine emerging as key hubs of activity. This geographical insight deepens our understanding of the platform's diverse user interactions.
This paper not only offers insights into sentiment analysis but also paves the way for future research exploring sentiment dynamics, language variations, and real-time interactions within the Twitter landscape.
Copyright (c) 2023 Lahore Garrison University Research Journal of Computer Science and Information Technology
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.