Home
Scholarly Works
Artificial intelligence, adversarial attacks, and...
Journal article

Artificial intelligence, adversarial attacks, and ocular warfare

Abstract

Purpose We explore the potential misuse of artificial intelligence (AI), specifically large language models (LLMs), in generating harmful content related to ocular warfare. By examining the vulnerabilities of AI systems to adversarial attacks, we aim to highlight the urgent need for robust safety measures, enforceable regulation, and proactive ethics. Design A viewpoint paper discussing the ethical challenges posed by AI, using ophthalmology as a case study. It examines the susceptibility of AI systems to adversarial attacks and the potential for their misuse in creating harmful content. Methods The study involved crafting adversarial prompts to test the safeguards of a well-known LLM, OpenAI's ChatGPT-4.0. The focus was on evaluating the model's responses to hypothetical scenarios aimed at causing ocular damage through biological, chemical, and physical means. Results The AI provided detailed responses on using Onchocerca volvulus for mass infection, methanol for optic nerve damage, mustard gas for severe eye injuries, and high-powered lasers for inducing blindness. Despite significant safeguards, the study revealed that with enough effort, it was possible to bypass these constraints and obtain harmful information, underscoring the vulnerabilities in AI systems. Conclusion AI holds the potential for both positive transformative change and malevolent exploitation. The susceptibility of LLMs to adversarial attacks and the possibility of purposefully trained unethical AI systems present significant risks. This paper calls for improved robustness of AI systems, global legal and ethical frameworks, and proactive measures to ensure AI technologies benefit humanity and do not pose threats.

Authors

Balas M; Wong DT; Arshinoff SA

Journal

AJO International, Vol. 1, No. 3,

Publisher

Elsevier

Publication Date

October 3, 2024

DOI

10.1016/j.ajoint.2024.100062

ISSN

2950-2535
View published work (Non-McMaster Users)

Contact the Experts team