SHOULD COMPANIES ENCOURAGE ALL EMPLOYEES TO USE CHATGPT?

Yes: ChatGPT is a valuable workplace tool that assists, but doesn’t replace, human ingenuity (by Jennifer Clover, SHRM-CP).

With the use of artificial intelligence increasing in many industries, products like OpenAI’s ChatGPT are becoming the norm rather than the exception.

The number of visits to ChatGPT hit 1.6 billion in December 2023, and the platform has more than 180 million monthly users, up from 100 million earlier in 2023. Companies would be wise to embrace the technology as a strategic tool rather than fear it as a replacement for human labor and ingenuity.

According to Jobscan’s 2023 report on applicant tracking system usage, about 97 percent of Fortune 500 companies and 66 percent of midsize companies use AI in the form of applicant tracking software to create a simpler, more streamlined talent acquisition experience. For those companies to ban employee use of ChatGPT would be disingenuous, considering that their first contact with job candidates is already largely automated.

ChatGPT is an AI chatbot that uses natural language processing to create human-like dialogue. That makes it ideal for generating drafts of boilerplate text such as job descriptions, email templates, website copy, social media posts and other time-consuming writing tasks that require little innovation. Think of it as a linguistic Roomba: it serviceably sweeps up, gathers and distills information, but it can’t access the sentient corners of human experience.

People are, and will be, required to bridge those gaps for a long time to come. Content generated by ChatGPT needs human intervention, with attention to tone, brand and company culture, to become a usable final product. As ChatGPT learns over time what content the user wants, the relationship between the app and the user develops nuance, leading to more natural results.

Not just a source for icebreakers, ChatGPT can help employees have more meaningful face-to-face interactions. A brilliant conversation simulator, ChatGPT can partner with users in role-play, letting them rehearse a difficult conversation or preview how a complex idea might come across to its intended audience.

Type a prompt into ChatGPT, such as “My employee is not completing projects on time. Can we role-play the discussion I plan to have?” or “I’m new to my organization and have been invited to lunch with leadership. Can we role-play this scenario?” The app will produce responses and begin a conversation in a safe, virtual space, building confidence and easing some of the stress leading up to a challenging situation. The key is to craft the right prompt to make the role-play exercise most effective.
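For teams that would rather script such rehearsals than type into the chat window, a minimal sketch using OpenAI’s Python SDK might look like the following. The model name, prompt wording and role-play framing here are illustrative assumptions, not a prescription from the author.

```python
# A minimal sketch of scripting a role-play rehearsal with the OpenAI
# Python SDK (openai>=1.0). Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A system message frames the role-play; the user message opens it.
messages = [
    {"role": "system",
     "content": "Role-play as my employee. Stay in character and "
                "respond as they realistically might."},
    {"role": "user",
     "content": "My employee is not completing projects on time. "
                "Can we role-play the discussion I plan to have?"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute any available model
    messages=messages,
)
print(response.choices[0].message.content)
```

Appending each reply to the messages list and re-sending it keeps the rehearsal going turn by turn, mirroring what the chat window does behind the scenes.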

ChatGPT lets HR professionals quickly access information on a variety of mercurial federal, state and local laws, with the caveat that the results must be checked against reliable sources for accuracy. As demands for worker speed and productivity increase, automating research tasks with ChatGPT will become a necessity rather than a novelty.

Of the many examples of ChatGPT’s potential, perhaps the most transformative is its use as assistive technology for neurodivergent employees and those with other learning differences. Professionals with ADHD and those on the autism spectrum have lauded ChatGPT on social media as a tool for improving productivity and personal organization. On an episode of her YouTube series, author and consultant Kara Dymond interviews content creator Carole Jean Whittington, who champions the use of ChatGPT by neuro-distinct business professionals to improve success in networking, job searches and one-on-one conversations.

ChatGPT is by no means a replacement for talent, but it is a rich learning tool for employees across industries. The more it learns about us, the better it gets, and it’s not going away anytime soon.

Jennifer Clover, SHRM-CP, is an HR generalist at Family Centers in Greenwich, Conn.

No: ChatGPT can compromise sensitive data and allow unwanted bias to creep in (by Chelsea Stearns).

When we ask ChatGPT to recommend a new martini recipe, we run the risk of the drink being a bit too strong for our taste. Of course, this is easily remedied with a little more olive juice.

When we use ChatGPT in the workplace, we risk more serious consequences that can’t be as easily corrected. Corporations such as Amazon, Apple and Verizon, to name a few, aren’t willing to take those risks and have restricted their employees’ use of ChatGPT. Here are several reasons why:

- Confidentiality can be compromised.

Companies are always trying to protect their confidential and proprietary information, which is why they ask employees to sign nondisclosure agreements (NDAs). Research published in the Vanderbilt Law Review shows that more than one-third of the U.S. workforce has signed an NDA, which means that by using ChatGPT, those workers risk violating their agreements and opening themselves and their employers up to potential litigation.

For example, say a software engineer who has signed an NDA pastes proprietary code into ChatGPT for automated testing. She has now inadvertently violated her NDA by disclosing confidential information to an unauthorized third party: ChatGPT. The code’s proprietary status is compromised because OpenAI’s terms of use indicate that content users provide in ChatGPT conversations can be used to train the model.

This also means users can’t guarantee that the content they take from ChatGPT isn’t another entity’s intellectual property, including trademarked or copyrighted material, putting companies at risk of additional litigation.

- Unwarranted bias can enter in.

The personal biases of those responsible for training ChatGPT should also be considered. HR professionals should be especially wary of using ChatGPT to create job descriptions, candidate scorecards and performance reviews, because the trainers’ biases can influence a company’s hiring, promotion and termination decisions, resulting in cases of disparate impact.

In a recent study, researchers from several universities and Adobe Research found significant gender biases when they asked ChatGPT to generate recommendation letters for hypothetical male and female employees. Nouns such as “expert” and “integrity” were used when describing male employees, while female employees were associated with terms such as “beauty” and “delight.”

- Quality and creativity are at risk.

ChatGPT can also reduce the quality of some employees’ work. In a study published by Harvard Business School, professional consultants were asked to offer recommendations to the CEO of a fictional company after reviewing internal data. Although AI use increased the productivity and work quality of experienced subjects on a complex consulting task, those who relied too heavily on AI and blindly accepted its output produced lower-quality solutions.

“These findings raise questions regarding using AI for high-risk tasks and responsible AI, a topic that is highly debated by AI policymakers and academics,” the study’s authors wrote.

- Sensitive data can be exposed.

Research from security firm LayerX found that 6 percent of the 10,000 employees it tracked for a study had put sensitive data into ChatGPT or other generative AI tools, with the worst offenders working in research and development, sales and marketing, and finance.
