Sowmya Karunakaran

Authored Publications
    Testing Stylistic Interventions to Reduce Emotional Impact in Content Moderation Workers
    Rashmi Ramakrishnan
    AAAI Conference on Human Computation and Crowdsourcing (2019), pp. 50-58
    Abstract: With the rise in user-generated content, there is a greater need for content reviews. While machines and technology play a critical role in content moderation, there is still a need for manual reviews, and such reviews can be emotionally challenging. We test the effects of simple interventions, such as grayscaling and blurring, in reducing the emotional impact of these reviews. We introduce the interventions in a live content review setup, allowing us to maximize external validity. Using a pre-test post-test experiment design, we measure review quality, average handling time, and emotional affect (via the PANAS scale). We find that a simple grayscale transformation provides an easy-to-implement solution that significantly reduces the emotional impact of content reviews. We observe, however, that a full-blur intervention can be challenging for reviewers.
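    As an illustrative sketch (not taken from the paper), a grayscale intervention can be as simple as applying the standard ITU-R BT.601 luma transform to each pixel before showing the image to a reviewer; the function name and pixel representation here are assumptions:

    ```python
    def to_grayscale(pixels):
        """Map a list of (r, g, b) tuples (0-255) to luma values (0-255).

        Uses the ITU-R BT.601 weights, a common choice for RGB-to-gray
        conversion. Real review tooling would operate on image buffers.
        """
        return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]
    ```

    In practice this transform would be applied server-side or in the review UI, so the reviewer never sees the full-color content.
    
    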
    Spam in User Generated Content Platforms: Developing the HaBuT Instrument to Measure User Experience
    Erik Brorson
    IEEE International Conference on Systems, Man, and Cybernetics, IEEE (2019) (to appear)
    Abstract: We are in an era of user-generated content (UGC), but our understanding of the impact of UGC spam on user experience is limited. Most prior instruments for measuring user experience were developed in the context of traditional spam types such as web spam or email spam. In this paper, we develop the 15-item HaBuT scale, consisting of three sub-scales — Happiness, Burden, and Trust — that measures user experience with respect to UGC spam. The items in the instrument are analyzed using confirmatory factor analysis with a sample of 700 responses from internet users. This process resulted in an instrument of high reliability and validity. The instrument is a valuable tool for researchers and practitioners interested in designing, implementing, and managing systems that rely on user-generated content, and for those studying the impact of UGC spam on user experience. We demonstrate a real-world application of the HaBuT scale by applying it to investigate the impact of review spam on mobile app users. We present the results of online experiments with 3300 participants across the US, India, and South Korea.
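    The abstract reports high reliability for the sub-scales. The paper itself uses confirmatory factor analysis; as an adjacent, illustrative computation only, a common reliability statistic for a multi-item sub-scale like Happiness, Burden, or Trust is Cronbach's alpha, sketched here from its textbook formula (function name and data layout are assumptions):

    ```python
    def cronbach_alpha(items):
        """Cronbach's alpha for a sub-scale.

        items: one list per scale item, each holding the same number of
        respondent scores. alpha = k/(k-1) * (1 - sum(item vars) / var(totals)).
        """
        k = len(items)
        n = len(items[0])

        def var(xs):
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)

        # Per-respondent totals across all items of the sub-scale.
        totals = [sum(item[i] for item in items) for i in range(n)]
        return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
    ```

    Perfectly correlated items yield alpha = 1.0, the upper bound; values above roughly 0.7 are conventionally read as acceptable reliability.
    
    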
    Testing Grayscale Interventions to Reduce Negative Emotional Impact on Manual Reviewers
    Rashmi Ramakrishnan
    Symposium on Computing and Mental Health, CHI Workshop '19 (2019)
    Abstract: Data exposed by breaches persists as a security and privacy threat for Internet users. Despite this, best practices for how companies should respond to breaches, or how to responsibly handle data after it is leaked, have yet to be identified. We bring users into this discussion through two surveys. In the first, we examine the comprehension of 551 participants on the risks of data breaches and their sentiment towards potential remediation steps. In the second survey, we ask 10,212 participants to rate their level of comfort towards eight different scenarios that capture real-world examples of security practitioners, researchers, journalists, and commercial entities investigating leaked data. Our findings indicate that users readily understand the risk of data breaches and have consistent expectations for technical and non-technical remediation steps. We also find that participants are comfortable with applications that examine leaked data — such as threat sharing or a "hacked or not" service — when the application has a direct, tangible security benefit. Our findings help to inform a broader discussion on responsible uses of data exposed by breaches.
    Abstract: A system and method are disclosed for verifying the authenticity of a message using the hyperlinks embedded in it. The method works by checking that the hyperlinks in the text point to the companies or brands the text portrays. The verification uses a knowledge base of company/brand information as a source for retrieving the known sites, pages, and domains of those companies and brands. The method then estimates a probabilistic measure of identity based on matches between the companies/brands mentioned in the text and the outgoing links from the text. In a variation, the method may additionally use contextual information from the message text to compute a coverage score. The coverage score is then fed into a machine learning model that combines it with other known indicators of genuineness to determine whether the message or web page is genuine.
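    The matching step described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the knowledge base, function name, and scoring (a simple fraction of matched brands rather than the patent's probabilistic measure) are all assumptions.

    ```python
    from urllib.parse import urlparse

    # Hypothetical knowledge base mapping brand names to their known domains.
    KNOWN_DOMAINS = {"examplebank": {"examplebank.com"}}

    def coverage_score(brands_mentioned, hyperlinks, kb=KNOWN_DOMAINS):
        """Fraction of mentioned brands whose known domains the links point to.

        A message claiming to be from a brand but linking elsewhere scores low,
        which downstream logic could treat as a spoofing signal.
        """
        if not brands_mentioned:
            return 0.0
        matched = 0
        for brand in brands_mentioned:
            domains = kb.get(brand.lower(), set())
            if any(urlparse(link).netloc.lower().removeprefix("www.") in domains
                   for link in hyperlinks):
                matched += 1
        return matched / len(brands_mentioned)
    ```

    A message mentioning "ExampleBank" that links to `www.examplebank.com` would score 1.0, while the same text linking only to an unrelated domain would score 0.0 — the kind of mismatch the disclosed method flags.
    
    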