There are valid concerns about using generative AI at work. But it’s helpful to consider how it can empower our human potential.
Generative artificial intelligence (AI) is changing how we work.
According to a Fishbowl by Glassdoor survey, by late January 2023, 43 percent of professionals were using AI tools such as ChatGPT, a popular generative AI technology, for work-related tasks. Fifty-seven percent of employees who’ve used ChatGPT at work report that it boosts their productivity, according to a TalentLMS survey. In HR specifically, the market for generative AI is expected to exceed $1.5 billion by 2032.
It’s not hard to understand why generative AI is having an impact. Some of today’s most popular generative AI tools are free, user-friendly and easy to access. Any HR practitioner or employee with an internet connection can use them to complete tasks, be more creative and satisfy their curiosity. But along with the productivity and educational benefits generative AI offers, concerns have arisen, especially in the workplace.
Before we dive in, let’s define our terms.
What is generative AI?
Generative AI is capable of creating content, including text, images, audio or video, when prompted by a user. Generative AI tools generate responses using algorithms often trained on open-source information, such as text and images from the internet. These systems are not cognitive and do not possess human judgment.
“Generative AI is the next phase in data, AI and machine learning coming together,” says Jack Berkowitz, chief data officer, ADP. “It’s based on this model that can predict the next word. It almost feels like magic.”
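Berkowitz’s “model that can predict the next word” can be illustrated with a deliberately tiny sketch. Real generative AI systems are large neural networks trained on enormous datasets; the toy bigram model below (the corpus and word choices are invented for illustration) only shows the core idea of predicting the most likely next word from observed text.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the massive text datasets real models train on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Chaining such predictions, one word at a time, is how a generative model produces whole passages — which is also why its output can feel like magic while still needing a human check.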
For today’s employees and HR practitioners, this means getting used to having generative AI as a coworker.
Generative AI as a coworker
You might be wondering: What’s it like having generative AI as a coworker? If you’ve used generative AI at work, you may already know the answer. For those who haven’t, it’s like having a knowledgeable and articulate personal assistant, albeit one who might occasionally say something dubious or offensive.
“It’s like the coworker who knows a lot, who you’re pretty sure is smarter than you are, but who also tends to get really unfocused and has the potential to say things that might be a bit off — so you need to keep watch,” says Helena Almeida, vice president, managing counsel, ADP. “You have to make sure you’re using it in the right way, but it’s a great help.”
Empowering human potential
There are valid concerns about generative AI and its impact on the workplace and labor market. But it’s also helpful to consider how generative AI can empower human potential at work today.
For example, generative AI can help HR practitioners and employees be more creative.
“We all have ways that we think about issues and problems. Generative AI has the potential to help us break through our own thinking about things and open up creativity around how we solve problems at work,” Almeida says.
It can also help all employees, HR practitioners included, realize their potential and do more.
“Lots of people talk about the risks,” says Jason Albert, global chief privacy officer, ADP. “It has risks, but its ability to help people succeed and achieve more, that’s what I’m really excited about.”
In other words, generative AI is a technology created by people that can ultimately be designed for people.
“Our role at ADP is to help companies flourish and help their employees flourish,” Albert says. “That’s where this goes over time: making the technology better for people.”
Bias in AI remains a workplace problem
According to the U.S. Equal Employment Opportunity Commission (EEOC), it is illegal to discriminate against an applicant or employee because of their race, color, religion, sex (including gender identity, sexual orientation and pregnancy), national origin, age (40 or older), disability or genetic information. These requirements aren’t newsworthy, and many business leaders, HR practitioners and employees understand them. What remains newsworthy, however, is AI’s potential to perpetuate — or mitigate — discrimination in hiring and employment decisions.
Bottom line: Discrimination is illegal, regardless of whether a human or a system does the discriminating.
“There are concerns that if AI ‘sees’ that the last five successful candidates were white men, then it will ‘learn’ that a successful candidate is a white man,” Almeida says. “The advantage we have at ADP when we’re talking about an AI algorithm is that we have data scientists focused on what data is going into the algorithms that are going to evaluate candidates. How do we monitor the algorithms to make sure the AI’s not learning what we don’t want it to learn? There are always steps being taken to make sure this bias doesn’t seep in. When we’re talking about AI, we have advantages that help us be better and make better decisions about bias.”
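One common way practitioners monitor for the kind of bias Almeida describes is the four-fifths (80 percent) rule of thumb: compare each group’s selection rate against the highest group’s rate and flag large gaps for review. The sketch below is a minimal, hypothetical illustration of that check — the numbers are invented, and this is not a description of ADP’s actual monitoring process.

```python
# Hypothetical applicant outcomes for two groups (invented numbers).
outcomes = {
    "group_a": {"selected": 40, "applicants": 100},
    "group_b": {"selected": 25, "applicants": 100},
}

# Selection rate per group, and the highest rate as the benchmark.
rates = {g: d["selected"] / d["applicants"] for g, d in outcomes.items()}
top = max(rates.values())

for group, rate in rates.items():
    # Four-fifths rule: an impact ratio below 0.8 warrants human review.
    ratio = rate / top
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

A check like this doesn’t prove or disprove discrimination on its own; it simply surfaces disparities so that people — not the algorithm — can investigate and decide.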
AI’s potential for reducing workplace bias
While AI may lead to discriminatory outcomes, it may also help reduce them.
“Not only can we focus on AI from a data input, quality and monitoring standpoint, but AI can also help drive down bias because it can take the information and check for these things in a way that is harder to do for humans because we’re fallible,” Albert says.
AI can even help with discriminatory practices like pay inequity.
“Already behind the scenes of what makes ADP’s pay equity technology work is a lot of AI,” Berkowitz says. “Instead of having to go and trundle through a bunch of information, you get simple questions and answers. AI is lifting our learnings about pay equity to make them more understandable.”
Despite AI’s efficiency and benefits, monitoring for discriminatory practices should remain the responsibility of a human.
“Whether in recruiting or anything else, the person should always be in control. The locus of control should be the person, not the AI,” Berkowitz says. “AI’s an advisor, but still, it’s your decision. It’s up to you to decide how you want to take its recommendation and use it. Being focused on that partnership is the essential point. We don’t want to take control away from the person.”
How will generative AI change the workplace?
Generative AI has already changed the workplace. It helps HR practitioners and employees be smarter, more productive and more creative every day. But its evolution isn’t finished. For example, it will get better at assisting HR practitioners and employees with tasks.
“We’re going to see it go in a couple of interesting paths next,” Albert says. “There will be these levels of automation associated with it that none of us can really anticipate today, this notion of not only recommending a trip but booking that trip, of notifying your hotel that you’re going to be late, of rebooking your rental car, of booking a dinner at an airport. These types of things will happen automatically. I’m excited about that.”
Personalization will also increase.
“When you use one of these models, they’re stunning, but they’re also generic,” Albert says. “There’s this area of prompt engineering where you write a long request, but you get what you want. There’s going to be growth in this area. The answers you get will be more specific to you. It’s the personalization that will really take charge here.”
Generative AI at work: Guardrails, transparency
As AI evolves, new opportunities arise, and so does the potential for unintended consequences. These new tools must be used in ways that are ethical, secure and compliant. Organizations are already developing guidelines to ensure that interactions with generative AI tools happen within security and privacy guardrails.
“With new technologies like generative AI, people aren’t going to use it if they don’t trust it,” Almeida says. “Making sure that the product is compliant will give people the trust they need to feel comfortable using the product. All of that is education. Part of that trust will come from educating people on all the steps companies are taking to protect their data and to make sure the AI is working as it should.”
The immediate solution? Enlist the help of a trusted partner.
“It’s important to work with a vendor that enables you to meet your compliance obligations in this area,” Berkowitz says. “At ADP, we’ve added a human in the loop to our process, which means a human is ensuring high data quality and checking the output the AI generates.”
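The details of ADP’s review process aren’t described here, but the general human-in-the-loop pattern Berkowitz mentions can be sketched: AI output is treated as a draft that cannot be acted on until a named human reviewer signs off. All names and fields below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    content: str           # the AI-generated suggestion
    approved: bool = False
    reviewer: str = ""     # who signed off on it

def approve(rec: Recommendation, reviewer: str) -> None:
    """A human reviewer explicitly signs off on the AI's output."""
    rec.approved = True
    rec.reviewer = reviewer

def act_on(rec: Recommendation) -> str:
    """Refuse to act on any recommendation a human hasn't reviewed."""
    if not rec.approved:
        raise PermissionError("AI output requires human review before use")
    return rec.content

rec = Recommendation("Shortlist candidate 123 for interview")
approve(rec, "hr.reviewer@example.com")
print(act_on(rec))  # only reachable after human approval
```

The design choice is the one Berkowitz emphasizes: the AI is an advisor, and the system makes it structurally impossible for its recommendation to take effect without a person in control.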
ADP has also adopted rigorous principles and processes to govern its use of generative AI and AI in general.
Find out more:
AI and Data Ethics: Accountability and Transparency
AI and Data Ethics: 5 Principles to Consider
AI and Data Ethics: Data Governance
This article originally appeared on SPARK powered by ADP.