Accepted at Privacy Enhancing Technologies Symposium (PETS), 2025
The immersive nature of Virtual Reality (VR) and its reliance on sensory devices like head-mounted displays introduce privacy risks to users. While earlier research has explored users’ privacy concerns within VR environments, less is known about users’ comprehension of VR data practices and protective behaviors; the expanding VR market and technological progress also necessitate a fresh evaluation. We conducted semi-structured interviews with 20 VR users, showing their diverse perceptions regarding the types of data collected and their intended purposes. We observed privacy concerns in three dimensions: institutional, social, and device-specific. Our participants sought to protect their privacy through considerations when selecting the device, scrutinizing VR apps, and selective engagement in different VR interactions. We contrast our findings with observations from other technologies and ecosystems, shedding light on how VR has altered the privacy landscape for end users. We further offer recommendations to alleviate users’ privacy concerns, rectify misunderstandings, and encourage the adoption of privacy-conscious behaviors.
Analyzing Ad Prevalence, Characteristics, and Compliance in Alexa Skills
Accepted at IEEE Symposium on Security and Privacy (Oakland), 2025
With the rapid adoption of smart voice assistants like Amazon Alexa and the potential for more growth with large language model-powered assistants, as well as the introduction of advertising ID within Alexa, it is inevitable that advertisements (ads) will become prevalent on such platforms if not already. Although Alexa permits third-party developers to include ads within voice apps (known as skills) and enables targeted advertisement through ad identifiers, Alexa also lists an ad policy that restricts ads within skill responses, notifications, or reminders except in defined cases. However, it remains unclear whether all developers comply with these policies or attempt to bypass vetting processes to publish non-compliant ads. This paper presents the first large-scale analysis of advertising on the Alexa platform, examining ad prevalence, characteristics, and adherence to platform policies. We introduce an automated ad detection method using a fine-tuned large language model (LLM) with 88.92% accuracy and, using chain-of-thought (CoT) prompting, achieve 94.52% accuracy in identifying potential policy-violating ads. Analyzing 45,477 Alexa skills, we find that 13.58% include ads or promotional content, with themes such as travel and entertainment. Notably, some ads come from skills by Amazon-promoted agencies like “Vixen Labs” while others are generated by agencies solely focused on voice assistant platforms, such as "Skilled Creative." Our model identifies approximately 29.18% of ads as possible policy violations. We reported our findings to Amazon, resulting in a bug bounty reward. The proposed system aims to enhance Alexa’s vetting by automatically flagging potential ad violations and demonstrates how fine-tuned LLMs can support policy enforcement on voice platforms.
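To illustrate the kind of pipeline the abstract describes, here is a minimal, hypothetical sketch of chain-of-thought (CoT) prompting for ad detection in skill responses. The prompt template, verdict labels, and parsing logic are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical CoT prompt template for classifying an Alexa skill response
# as an ad. The wording and labels here are assumptions, not the paper's.
COT_TEMPLATE = """You are reviewing an Alexa skill response for advertising.
Skill response: "{response}"

Think step by step:
1. Does the response promote a product, service, or brand?
2. If so, does it fall under an allowed exception in the ad policy?
Finish with a single line: VERDICT: AD or VERDICT: NOT_AD
"""

def build_prompt(skill_response: str) -> str:
    """Fill the CoT template with a skill's spoken response."""
    return COT_TEMPLATE.format(response=skill_response)

def parse_verdict(model_output: str) -> str:
    """Extract the final verdict line from the model's reasoning output."""
    for line in reversed(model_output.strip().splitlines()):
        if line.startswith("VERDICT:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no verdict found in model output")
```

In a real system, `build_prompt` would feed a fine-tuned LLM and `parse_verdict` would read its response; here they only sketch the prompt-and-parse structure.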
Enabling Developers, Protecting Users: Investigating Harassment and Safety in VR
Accepted at USENIX Security Symposium, 2024
Virtual Reality (VR) has witnessed a rising issue of harassment, prompting the integration of safety controls like muting and blocking in VR applications. However, the lack of standardized safety measures across VR applications hinders their universal effectiveness, especially across contexts like socializing, gaming, and streaming. While prior research has studied safety controls in social VR applications, our user study (n = 27) takes a multi-perspective approach, examining both users' perceptions of safety control usability and effectiveness as well as the challenges that developers face in designing and deploying VR safety controls. We identify challenges VR users face while employing safety controls, such as finding users in crowded virtual spaces to block them. VR users also find controls ineffective in addressing harassment; for instance, they fail to eliminate the harassers' presence from the environment. Further, VR users find the current methods of submitting evidence for reports time-consuming and cumbersome. Improvements desired by users include live moderation and behavior tracking across VR apps; however, developers cite technological, financial, and legal obstacles to implementing such solutions, often due to a lack of awareness and high development costs. We emphasize the importance of establishing technical and legal guidelines to enhance user safety in virtual environments.
Unveiling Users’ Security and Privacy Concerns Regarding Smart Home IoT Products from Online Reviews
Accepted at IEEE Symposium on Security and Privacy (Oakland), 2024
The Internet of Things (IoT) has revolutionized the global market with lifestyle products such as fitness trackers (FT), smart home speakers (SHS), and surveillance and security camera systems (SSCS). While offering convenience, these products also introduce potential security and privacy (S&P) risks to buyers, often going unnoticed. Consumers’ incomplete mental models of the risks involved and the information asymmetry between buyers and sellers only add to the problem. Understanding consumer concerns in online product reviews can play a crucial role in bridging the gap of such information asymmetry. By establishing a balanced flow of information between buyers and sellers, manufacturers can leverage genuine concerns expressed in reviews to enhance product features while educating users about misinformation in reviews. In this study, we collected FT, SHS, and SSCS product reviews from three Amazon domains: the US, the UK, and India. Using a keyword-based search method focused on S&P concerns, we discovered a considerable number of reviews expressing notable concerns regarding security and privacy. Our qualitative analysis revealed that data security is a common concern across all product types. Further, our quantitative analysis exposed significant geographic variations, with the concern ratio being higher in the US than in the UK for all device types and higher than in the Indian domain for security cameras. These findings highlight the need for tailored security measures and user awareness campaigns in different parts of the world to address the identified concerns effectively.
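A keyword-based search like the one the abstract mentions can be sketched as a simple filter over review text. The keyword list below is a hypothetical example, not the study's actual vocabulary.

```python
# Illustrative keyword filter for surfacing security & privacy (S&P)
# concerns in product reviews. The keyword set is an assumed example;
# the paper's actual keyword list may differ.
SP_KEYWORDS = {"privacy", "spying", "hacked", "surveillance", "listening", "breach"}

def flags_sp_concern(review: str) -> bool:
    """Return True if the review contains any S&P-related keyword."""
    words = {w.strip(".,!?\"'").lower() for w in review.split()}
    return not SP_KEYWORDS.isdisjoint(words)
```

Flagged reviews would then go to qualitative coding, as in the study; the filter itself only narrows the candidate set.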
Understanding Parents’ Perceptions and Practices Toward Children’s Security and Privacy in Virtual Reality
Accepted at IEEE Symposium on Security and Privacy (Oakland), 2024
Recent years have seen a sharp increase in underage users of virtual reality (VR), where security and privacy (S&P) risks such as data surveillance and self-disclosure in social interaction have been increasingly prominent. Prior work shows children largely rely on parents to mitigate S&P risks in their technology use. Therefore, understanding parents' S&P knowledge, perceptions, and practices is critical for identifying the gaps for parents, technology designers, and policymakers to enhance children's S&P. While such empirical knowledge is substantial in other consumer technologies, it remains largely unknown in the context of VR. To address the gap, we conducted in-depth semi-structured interviews with 20 parents of children under the age of 18 who use VR at home. Our findings highlight parents generally lack S&P awareness due to the perception that VR is still in its infancy. To protect their children's interaction with VR, parents currently primarily rely on active strategies such as verbal education about S&P. Passive strategies such as parental controls in VR are not commonly used among our interviewees, mainly due to their perceived technical constraints. Parents also highlight that a multi-stakeholder ecosystem must be established towards more S&P support for children in VR. Based on the findings, we propose actionable S&P recommendations for critical stakeholders, including parents, educators, VR companies, and governments.
A Question Answering and Quiz Generation Chatbot for Education
Published in the proceedings of Grace Hopper Celebration India (GHCI), 2019
A number of chatbots have been developed for education. While many are designed to answer queries from publicly available or predefined knowledge bases, they offer no way to customize the information being queried, and none can generate self-assessment quizzes from an arbitrary document. This paper proposes a Question Answering and Quiz Generation Chatbot that allows a user to perform answer extraction and question generation on any input document.
A Survey of Techniques for Improving Security of GPUs
Published in the Journal of Hardware and Systems Security, 2018
The graphics processing unit (GPU), although a powerful performance booster, also has many security vulnerabilities that can make it a target for stealthy malware. In this paper, we present a survey of techniques for analyzing and improving GPU security. We classify the works on key attributes to highlight their similarities and differences. Alongside informing users and researchers about GPU security techniques, this survey aims to increase their awareness of GPU security vulnerabilities and potential countermeasures.
Updates
Feb 2025: Headset Harms! My work on VR security, privacy and safety was featured in NC State University's alumni magazine
Oct 2024: I’m presenting my work on VR security & privacy at the Triangle Area Privacy and Security Day (TAPS) at Duke University
Aug 2024: I’m presenting my paper on VR harassment and safety at USENIX Security 2024!
Aug 2024: New Paper Accepted: Our work on Privacy Expectations, Concerns, and Behaviors of Virtual Reality users has been accepted at PETS 2025 (preprint)
May 2024: Our work on understanding parents' perceptions of their children's VR usage was covered by NC State CSC news and Duke Today!
May 2024: I'm attending the RSA conference as a Security Scholar in San Francisco!
Mar 2024: New Paper Accepted: Our work on "Understanding Parents’ Perceptions and Practices Toward Children’s Security and Privacy in Virtual Reality" has been accepted at IEEE S&P 2024 (preprint)
Feb 2024: First Paper Accepted: Our work on VR harassment and safety controls ("Enabling Developers, Protecting Users: Investigating Harassment and Safety in VR") has been accepted at USENIX Security 2024 (preprint)
Feb 2024: I’m presenting our work on VR harassment and safety controls at the 3rd Annual North Carolina Cybersecurity Symposium
Nov 2023: I passed my Written Preliminary Examination at NC State!
Aug 2023: I’m serving on the USENIX Security '24 Artifact Evaluation committee
Aug 2023: I’m presenting a poster titled "Virtual Adventures, Real Challenges: Analyzing Harassment Controls in VR" at USENIX Security 2023
Aug 2023: I’m attending GREPSEC VI!
Jun 2023: I’m serving on the USENIX Security '23 Artifact Evaluation committee
Apr 2023: I've been honored with the College of Engineering Graduate Enhancement Award at NC State
Apr 2023: I'm presenting a poster at the Graduate Student Symposium, NC State
Mar 2023: I'm attending WiCyS 2023 in Denver, Colorado
Jan 2023: I'm the teaching assistant for CSC 433: Privacy in the Digital Age
Aug 2022: Started my Ph.D. at NC State!