LLMs are Vulnerable to Malicious Prompts Disguised as Scientific Language

Published in Under Review, 2025

Neeraja Kirtane*, Yubin Ge, Hao Peng, Dilek Hakkani-Tür