A new report by UWindsor Human Kinetics researchers explores the risks of AI in maritime defence (CANADIAN MARITIME SECURITY NETWORK; CANVA STOCK/University of Windsor)
By Lori Bona
From navigation to monitoring ocean traffic, artificial intelligence (AI) is playing an increasing role in ships and maritime operations — including those used by the Canadian Armed Forces.
But relying too heavily on AI in maritime vessels introduces new risks for human operators and defence systems, according to a new report from researchers at the University of Windsor.
“AI is everywhere; it has become ubiquitous for nearly every task, with millions of people relying on it more and more,” says lead author Julie Webeck, a master’s student in the Faculty of Human Kinetics.
“Whether we are talking about cars or ships, the risks of overusing AI are similar — and in remote areas or in a military context, the implications can be even more serious.”
For the report, published recently by the Canadian Maritime Security Network, the researchers reviewed existing literature on maritime autonomy, including studies of real ferry operations, autonomous ferry trials, ship bridge simulations and interviews with navigators and system designers.
They found that AI can help improve maritime operations by analyzing large volumes of data, detecting unusual activity and coordinating with international partners.
However, heavy reliance can shift humans from actively controlling the ship to mainly monitoring technology, reducing how engaged operators remain with their surroundings.
These risks include reduced situational awareness, erosion of skills, and trust issues, with operators either overtrusting systems or ignoring important information. “Both situations can have catastrophic consequences,” Webeck says.
Dr. Francesco Biondi, a professor in the Faculty of Human Kinetics who worked on the report with Webeck, adds, “Because it contains the word ‘intelligence,’ people often assume AI is more capable than it is. That perception can create a vicious cycle, where we trust automated systems more than we should.”
Biondi leads UWindsor’s Human Systems Lab, a cross-disciplinary research group that studies how people interact with emerging technologies. He says society is still grappling with how AI should be used. “That uncertainty tends to lead to overreliance, which becomes especially risky in defence and military scenarios.”
For example, in a challenging environment such as the Arctic, where infrastructure is limited and conditions are harsh, ships have fewer backup systems. “If you are too reliant on AI, and if something critical happens and an operator is forced to take over in an emergency, their reaction is not as strong as it otherwise would be,” Webeck says.
Overreliance on automation could also allow adversaries to manipulate navigation signals or data, leaving operators unaware of subtle threats, the report says.
To reduce risks, the report recommends designing AI systems that keep human operators actively involved. This includes improving how information is displayed, ensuring operators regularly practise manual and decision-making skills, and providing clear explanations of how AI systems reach their conclusions.
The researchers say the goal of the report is to inform policy and decision-making in government and national defence organizations.
“Especially in a military context, there are significant policy and training implications with AI,” Webeck says. Her interest in the topic is shaped by professional and personal experience. She spent five years in U.S. Army counterintelligence and comes from a family with a long military tradition, including a son serving in the Canadian Navy.
“AI systems are often not designed with humans at the centre,” Webeck says. “Nobody really knows what is going on inside the ‘black box.’”
Biondi agrees: “Right now, AI is often viewed as a replacement for humans rather than as a tool that augments or assists them,” he says.
“That approach would only make sense if the technology were bulletproof. But we know AI can fail, and whenever technology isn’t perfect, having it replace humans completely is a recipe for disaster.”
To learn more, read the full report here.