Abstract: The emergence of generative artificial intelligence (AI) and its widespread adoption across digital platforms have fundamentally altered the nature of online communication and cybersecurity threats. AI-generated content (AIGC), encompassing synthetic text, images, audio, and multimedia, has become increasingly indistinguishable from human-created material. While these advancements provide significant benefits in productivity and accessibility, they also introduce serious security challenges. One of the most critical concerns is the heightened susceptibility of individuals to phishing and social engineering attacks, as AI-generated content enables adversaries to exploit human cognitive biases with unprecedented realism and scale.

This research paper examines the intersection of human behaviour, phishing click-through risks, and the proliferation of AI-generated content. It explores how cognitive biases, emotional triggers, and information-processing limitations contribute to user vulnerability when exposed to sophisticated AI-crafted phishing messages. The paper further analyses recent advancements in AI-generated content technologies, empirical evidence of their misuse on real-world platforms, and current defensive mechanisms such as watermarking, scalable detection frameworks, and behavioural interventions. By synthesising contemporary research, this study emphasises the necessity of integrating human-centric approaches with technical defences to effectively mitigate AI-driven phishing threats.

Keywords: AI-generated content, Phishing attacks, Human behaviour, Cognitive bias, Social engineering, Cybersecurity.


DOI: 10.17148/IMRJR.2026.030202

Cite:

[1] Sushant Patil, Kanchan Patil, "Human Behaviours & Phishing Click-Through Risks Under AI-Generated Content," International Multidisciplinary Research Journal Reviews (IMRJR), 2026, DOI 10.17148/IMRJR.2026.030202