Is generative AI making us more productive, or less connected to our work?

Odette School of Business professor Dr. Esraa Abdelhalim has received a Social Sciences and Humanities Research Council Insight Development Grant to study how generative artificial intelligence can be used to enhance workplace performance without undermining motivation or personal agency. (DAVE GAUTHIER/University of Windsor)

By Victor Romao 

For most workers, the appeal of generative AI is obvious: faster drafts, quicker analysis, less time on the routine parts of the job.  

But Odette School of Business professor Dr. Esraa Abdelhalim is asking a harder question — what does it cost when the parts you hand off are the parts that make the work feel like yours? 

Funded by a Social Sciences and Humanities Research Council Insight Development Grant, Abdelhalim’s research looks beyond the now-familiar promises of speed and efficiency.  

Instead, it asks what happens psychologically when tools like ChatGPT become embedded in everyday work. 

“Most conversations around generative AI focus on output quality, speed and efficiency,” said Abdelhalim. “But work is also a psychological experience. People don’t simply want to finish tasks faster; they want to feel competent and valuable in what they do.” 

As generative AI tools increasingly draft text, analyze data and offer decision support, Abdelhalim’s research examines whether performance gains come at a cost.  

One risk, she suggests, is that overreliance on AI could subtly shift how people see their own abilities. 

“When people begin deferring thinking, writing or problem-solving to AI, they may move from being active contributors to passive reviewers,” said Abdelhalim. “Over time, this can weaken confidence in their own capabilities. Even if performance looks strong on the surface, individuals may feel less mastery and less pride in the outcome.” 

That tension is especially pronounced in tasks that form the core of professional identity.  

Abdelhalim points to cognitively demanding work such as creative problem-solving, judgment-heavy analysis and communication tasks — areas where expertise has traditionally been demonstrated and refined over time. 

“The balance becomes fragile when AI takes over tasks that are central to how people demonstrate skill and judgment,” she said. “It’s also fragile in environments where polished AI outputs and speed are implicitly rewarded over depth and engagement.” 

Rather than framing generative AI as inherently harmful or beneficial, Abdelhalim’s work aims to define the conditions under which the technology supports meaningful work — and when it risks eroding it.  

A key focus is distinguishing between AI as a collaborator versus AI as a substitute. 

“Meaningful work is more likely to be supported when AI helps people extend their thinking, learn faster or reduce routine burden, while preserving human judgment and authorship,” she said.  

“Meaning can begin to diminish when AI takes over the most intellectually or personally rewarding parts of a job.” 

Those insights have implications well beyond individual employees.  

For organizational leaders, Abdelhalim argues that AI adoption decisions should involve more than a narrow efficiency calculation. 

“Leaders should ask why they are using AI and whether the gains are worth the disruption,” she said. “An efficiency-focused deployment may look successful in the short term but carry longer-term risks related to autonomy, capability development and trust.” 

The research also speaks to educators and policymakers grappling with how to prepare people for AI-enabled work without encouraging dependency. Abdelhalim sees a need to shift the conversation from adoption to awareness. 

“For educators, that means designing learning experiences that preserve critical thinking and original problem-solving rather than allowing AI to do most of the work,” she said. “For policymakers, it means supporting responsible AI frameworks that protect human capability development over time.” 

Ultimately, Abdelhalim hopes her research encourages organizations to take a more deliberate approach to generative AI — one that recognizes context matters. 

“Successful AI integration isn’t just about improving performance,” she said. “It’s about fitting with the organization, the team and the objectives. Blindly adopting AI because of pressure or trends can lead to wasted resources and, paradoxically, decreased productivity.” 
