Investigating Safety Vulnerabilities of Large Audio-Language Models Under Speaker Emotional Variations
Abstract
This work investigates the impact of speaker emotion on the safety of large audio-language models, revealing inconsistencies and vulnerabilities that call for targeted alignment strategies.
Large audio-language models (LALMs) extend text-based LLMs with auditory understanding, offering new opportunities for multimodal applications. While their perception, reasoning, and task performance have been widely studied, their safety alignment under paralinguistic variation remains underexplored. This work systematically investigates the role of speaker emotion in LALM safety. We construct a dataset of malicious speech instructions expressed across multiple emotions and intensities, and evaluate several state-of-the-art LALMs. Our results reveal substantial safety inconsistencies: different emotions elicit varying levels of unsafe responses, and the effect of intensity is non-monotonic, with medium expressions often posing the greatest risk. These findings highlight an overlooked vulnerability in LALMs and call for alignment strategies explicitly designed to ensure robustness under emotional variation, a prerequisite for trustworthy deployment in real-world settings.
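The evaluation described above reduces to sweeping a grid of (emotion, intensity) conditions over the same set of malicious spoken instructions and scoring each model response for safety. Below is a minimal sketch of such a sweep; `query_lalm` and `is_unsafe` are hypothetical placeholders for a model's inference call and a safety judge, and the emotion and intensity labels are illustrative assumptions, not the paper's exact taxonomy.

```python
# Illustrative sketch of an emotion x intensity safety sweep for an LALM.
# query_lalm and is_unsafe are hypothetical placeholders, not APIs from the paper.
from itertools import product

EMOTIONS = ["neutral", "happy", "sad", "angry", "fearful"]  # assumed label set
INTENSITIES = ["low", "medium", "high"]                     # assumed intensity levels


def query_lalm(audio_path: str) -> str:
    """Placeholder for a real LALM inference call on a speech instruction."""
    raise NotImplementedError


def is_unsafe(response: str) -> bool:
    """Placeholder for a safety judge (e.g., rule-based or an LLM grader)."""
    raise NotImplementedError


def unsafe_rates(
    audio_index: dict[tuple[str, str], list[str]],
) -> dict[tuple[str, str], float]:
    """Compute the unsafe-response rate for every (emotion, intensity) cell.

    audio_index maps (emotion, intensity) to the audio files in which the
    same malicious instructions are rendered with that expression.
    """
    rates: dict[tuple[str, str], float] = {}
    for emotion, intensity in product(EMOTIONS, INTENSITIES):
        clips = audio_index.get((emotion, intensity), [])
        if not clips:
            continue
        unsafe = sum(is_unsafe(query_lalm(clip)) for clip in clips)
        rates[(emotion, intensity)] = unsafe / len(clips)
    return rates
```

Comparing the per-cell rates along the intensity axis within each emotion is what would surface the non-monotonic pattern reported above, where medium-intensity expressions often yield the highest unsafe-response rate.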
Community
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- Benchmarking Gaslighting Attacks Against Speech Large Language Models (2025)
- Beyond Text: Multimodal Jailbreaking of Vision-Language and Audio Models through Perceptually Simple Transformations (2025)
- VStyle: A Benchmark for Voice Style Adaptation with Spoken Instructions (2025)
- SARSteer: Safeguarding Large Audio Language Models via Safe-Ablated Refusal Steering (2025)
- Do Audio LLMs Really LISTEN, or Just Transcribe? Measuring Lexical vs. Acoustic Emotion Cues Reliance (2025)
- VoiceAgentBench: Are Voice Assistants ready for agentic tasks? (2025)
- SageLM: A Multi-aspect and Explainable Large Language Model for Speech Judgement (2025)