
AI and ChatGPT: Have We Really Thought About What’s Coming?

By Brandon N. Towl

The initial release of ChatGPT came in November of 2022, but there was already buzz about it (and similar AI programs) the summer before its release. And in those six months, a lot was written about ChatGPT: Its uses, its potential, its failures. Far fewer pieces have actually asked the question “So what does this all mean?”

That’s a broad question. (Questions about meaning usually are.) Fortunately, it’s precisely the sort of question that someone with a philosophy background is comfortable rolling around with. This piece is not yet-another-piece on what ChatGPT might or might not be able to do; rather, it is an indulgence in a thought experiment: What might the near future look like in an AI-fueled business world?

For example: What roles will disappear? Which will change? What practices will vanish? Which will appear out of thin air?

I won’t pretend to answer these questions in any comprehensive way. (And if you are looking for a writer to answer the question “Will my job be obsolete now?” you will be sorely disappointed—but buy me a cup of coffee sometime and I can give you my educated opinion.)

Maybe we can start thinking about these questions by looking at some concrete examples and seeing how they play out.

Playing Out ChatGPT’s Uses: Four Examples

So let’s do that. Let’s assume that ChatGPT can do most of the things advertised, and that it can do them well—flawlessly, in fact. And let’s follow four people who have decided to use ChatGPT, to their immediate benefit. What happens next? What might be a likely, logical next step?

(A quick note. ChatGPT is what’s been hitting the news channels, so I use ChatGPT throughout most of the examples here. It sounds less generic than just saying “AI.” But the AI in question need not be ChatGPT; there are indeed many others, and many more that have yet to be developed. So, whenever you see the term, feel free to fill in your favorite AI.)

    • A marketing director uses ChatGPT to put together a basic marketing plan for a new product launch. She spends some time polishing it and adding details, but it takes her a tenth of the time it would take to create such a plan from scratch. She can now use that time for other pressing tasks.
    • A job seeker is applying for a senior engineer role. Although his work speaks for itself, he decides to use ChatGPT to create a cover letter and resume template—just to ensure everything is in the right format and nothing gets left out.
    • A development team is short-staffed. The lead developer has most of her staff coding, which means less time for quality control. She uses ChatGPT as an extra check on the code being produced. The AI is able to find small mistakes like a misplaced comma or a misnamed variable orders of magnitude faster than human checkers, and code can be deployed more quickly.
    • An eCommerce company has an upcoming deadline for a website launch. The one thing getting in their way is the 235 product descriptions that have to be generated for all of their products. The web team uses ChatGPT to create “first generation” product descriptions for launch, allowing their copywriter the leisure of producing more compelling copy for their most popular products later on.

Think these are far-fetched? They are exactly the kinds of examples touted in article after article on ChatGPT’s business uses.

So let’s assume that ChatGPT can do all these things (and more). Now, let’s work out what might happen in these scenarios.

The Marketing Director

The marketing director does indeed save quite a bit of time creating marketing plans—and also finds that she can use AI for things like social media calendars, persona reports, and other marketing items as well.

…However, the CEO of the company also has access to ChatGPT. He knows marketing pretty well but hired the marketing director so she could take marketing tasks (like planning) off his plate while he focused on other aspects of running the company. With ChatGPT, he realizes that he himself can put together those plans in record time. Pretty soon, he sees he can eliminate the marketing position. (Those few marketing functions the AI can’t do, like attending trade shows, are either outsourced or given to interns.)

The Job Seeker

The job seeker does indeed put together an impressive (and error-free) resume and cover letter. The thing is, the hundreds of other people applying for the job are using ChatGPT for exactly the same thing.

…This leaves the HR person tasked with doing the first-pass (and second-pass) screenings in the lurch. It used to be the case that he could rule out those who didn’t follow directions, or had massive spelling errors, or simply had a resume that was hard to follow. Now he is faced with hundreds of similar-sounding resumes for the same position. He begins using random and arbitrary criteria to cut down the list of applicants. This one is cut because he has a funny, foreign-sounding name. Another is cut because she applied for the job on a Tuesday. When it comes time for interviews, it’s little different from a scenario where the finalists were chosen at random—in fact, it’s a little worse, because the HR professional’s own biases feature more prominently in the process (remember that funny, foreign-sounding name?).

The Lead Developer

The lead developer does manage to catch a lot of small, silly errors very quickly. Quality control improves for a time, even with the thin staff. The team comes to lean on ChatGPT more and more to find errors before testing and deployment.

Then it happens: A complaint is lodged. An app does something unexpected. Maybe it displays a bank account number inappropriately. Or maybe it serves up information that is inaccurate, and someone loses money over it. The problem is not that the code had a noticeable error that prevented the app from working. Rather, it had an error of intent: The app worked just fine but did something that went counter to its intended use—something the AI had no way of discerning. The company now faces a lawsuit for code that lacked appropriate oversight.
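
To make that distinction concrete, here is a minimal, hypothetical sketch (the function names and masking rule are my own invention, not anything from the scenario above): the code contains no typos or syntax errors, so a surface-level check of the kind described would pass it, yet it quietly violates its intended purpose.

```python
# Hypothetical example: syntactically valid code with an "error of intent."
# An automated checker finds no misplaced commas or misnamed variables here,
# but the function leaks the very data it was supposed to hide.

def mask_account_number(account_number: str) -> str:
    """Intended behavior: show only the last four digits, e.g. '****6789'."""
    # Error of intent: the full number is returned unchanged.
    # Nothing crashes and nothing is misspelled -- the app "works just fine."
    return account_number


def render_account_summary(account_number: str, balance: float) -> str:
    """Builds the text shown on the account page."""
    return f"Account {mask_account_number(account_number)}: ${balance:,.2f}"


if __name__ == "__main__":
    # Prints the complete account number, contrary to the app's intended use.
    print(render_account_summary("123456789", 2500.0))
```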

The eCommerce Company

The eCommerce company’s web team manages to generate all 235 product descriptions, and they turn out really well. (We can even assume that Google’s stance on AI-generated content is a non-issue here.) In fact, as time goes on, they feel less and less pressure to give any rewrites to the copywriter. She is eventually let go, as ChatGPT seems able to handle product descriptions, and the company is eager to cut costs.

Furthermore, product descriptions for a number of products, sold via various channels, begin to become “standardized”: the more companies use ChatGPT to describe the same or similar products, the more other instances of the AI fall back on the same language, creating a kind of weird feedback loop. Pretty soon, most readers can ignore product descriptions entirely: “If you’ve read about one red dress, you’ve read about them all.” Eventually, only truly new and innovative products ever merit descriptions.

Over time, anything that is not “truly new and innovative” is assumed to be “commoditized” once the AI has a standard description for it. Indeed, business leaders begin using this as the new yardstick for determining commoditization.

What’s Wrong With This Future?

So what’s wrong with this future? The CEO gets to cut costs. The HR professional uses a random algorithm (or maybe even another AI) to pick candidates (and possibly uses the time saved to get some implicit-bias training). The dev team either makes the argument for a bigger team, or else uses its mistakes to further sharpen its automated testing tools. The eCommerce company happily does its business, and the industry appreciates having a new measure of commoditization. Why can’t people adapt? Why not be happy with this future?

My aim is not to make value judgments here; maybe these are good developments. But they are not small ones. That is my first point. We’re not just talking about using ChatGPT to help write reports and blog articles: We’re talking about totally upending the pool of jobs in a number of industries, and which roles remain. We’re not just using ChatGPT to write resumes, for example: We’re changing—in a massive way—how job candidates find, apply to, and get selected for jobs.

My second point: This is not the future. This is now. People are already using AI to generate their marketing plans and content. Job applicants are using AI to fill out job applications, and HR professionals are using AI to filter and select those applications. Coders use AI to debug code, and their colleagues over in Legal have no plan for dealing with the potential fallout. eCommerce stores are using AI to create product copy, and it’s only a matter of time before product descriptions become as standardized as textbook definitions. This is all happening now.

If this is the future you envisioned, you’re in luck: It’s just about here. But if there’s something unsettling about the pace of that change…get ready for a shock. We’ve seen just the beginning.

Brandon N. Towl is Founder and Head Writer at Words Have Impact.
