ChatGPT uses in knowledge work

ChatGPT is useful in my work.

It is already extremely useful for knowledge workers, despite its flaws. However, claims that it will disrupt Google are incorrect. While there may be a viable business model, it's more likely that companies will charge directly for the technology to increase productivity. It's no surprise that Microsoft has shown interest, given that Office365 is the perfect platform to monetise ChatGPT. It’s about productivity, not knowledge.

To understand why ChatGPT is useful in this way, this article opens with a brief explanation of how the technology works. Based on that understanding, I will then explain, conceptually, the value and benefit of using it. Then I will demonstrate concretely how I've applied the technology to my work in Marketing, Design and Technology Consulting. Finally, I’ll share thoughts on how we as knowledge workers can survive and thrive in the AI age.

What is ChatGPT?

ChatGPT is a chat interface built on a family of large language models (LLMs). In simple terms, an LLM is a statistical model that predicts the next word in a sequence based on the words that came before it. The model has been trained on a corpus of text drawn primarily from the open web.

In essence, ChatGPT has analyzed a vast amount of written content on the internet and is capable of approximating how a human with internet access might respond to certain questions.
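The "predict the next word" idea can be sketched with a toy bigram model. To be clear, this is not how GPT works internally (real LLMs use neural networks over subword tokens, trained on vastly more data), and the mini-corpus below is invented for illustration; it only demonstrates the core statistical intuition of continuing a sequence with whatever most often came next in the training data.

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus, standing in for "the open web".
corpus = (
    "the model predicts the next word . "
    "the model learns from the web . "
    "the web contains text written by humans ."
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "model" – the most frequent follower of "the"
```

Note what this implies: the prediction is driven by frequency in the training text, not by any notion of truth, which is exactly why the output reflects consensus rather than fact.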

Some refer to this as "having infinite interns." (source: Ben Evans)

What is ChatGPT good for? 

Since the machine is effectively "spitting back a mushed-together version of what people have already written on the internet," some people see it as a threat to replace Google. However, I believe those people are missing the point. The tool is good for generating text, not for finding information.

The text it generates? It’s “plausible bullshit” in most instances. The best article I’ve read on this topic is this one from the FT:

If you care about being right, then yes, you should check. But if you care about being admired or being believed, then truth is incidental. ChatGPT says a lot of true things, but it says them only as a byproduct of learning to seem believable.

So, you know how we often think something is true just because everyone says it is? That's what we call consensus. For instance, have you heard that the Chevy Nova didn't sell well in Latin America because "no va" means "doesn't go" in Spanish? This story appears in a lot of marketing and business books, and people just seem to accept it as true. But here's the thing: is it actually true?

A quick Google would bring you to the truth:

The truth is that the Chevrolet Nova's name didn't significantly affect its sales: it sold well in both its primary Spanish-language markets, Mexico and Venezuela, and its Venezuelan sales figures actually surpassed GM's expectations.

So the model is prone to creating inaccurate information in an effort to be “plausible” or “believable.” It’s easy to see this as a failing of the machine, but to me it sounds like a very human trait.

The machine is just trying to make conversation – how cute!

What has ChatGPT been good for in my day-to-day work as a technology & marketing consultant? 

Given all of the above, it is not surprising that ChatGPT (or any LLM) cannot provide the same level of value (strategy, relevance, execution) as a good human team. Just as you don’t want your team to consist entirely of Average Joes, ChatGPT will never fully replace human teams.

On the other hand, ChatGPT is useful for "idiot-proofing" the work, because it is proficient at providing a consensus of how average humans have thought and written about particular topics.

For instance:

  • Prior to beginning any ideation or creation work, ChatGPT can provide me with a basic strawman to present to the team. This can then be validated or (more often) challenged by their expert perspectives.

  • Once work has begun and the team has started producing outputs, ChatGPT can provide me with a framework to assess the validity of the work.

I use ChatGPT as a sort of "audience surrogate" to jump-start consideration processes or stress-test work output.

There are already completely synthetic "Usability Testing" tools available. As with ChatGPT, their utility can never compare to a well-conducted Usability & UX Testing exercise. However, there are instances in which having a quick conversation with ChatGPT is useful.

Often, expert and specialist teams gain great value from speaking with a non-expert in the course of their work. You don’t want your team to consist of Average Joes, but once in a while the team should interface with one – it keeps teams grounded.

A visualisation of the use of ChatGPT

What might this look like visually?

Working with an excellent team is akin to having a well-defined, complete, high-fidelity understanding of problems, solutions, and the relationship between those things. On that basis, what does ChatGPT do?

You know the double diamond…

When a team is left to work on its own, it may deliver a solution with gaps in its thinking or areas that are not fully developed, which the client could perceive as imperfect. It's important to note that this is not necessarily because the team hasn't considered all the angles, but rather that the team's explanation may not have highlighted certain aspects (we in the industry call that "pitchcraft").

Using the double diamond as an analogy, the solution presented may have some holes in it, as shown below:

What can ChatGPT provide? At this stage, ChatGPT can provide a solution that looks like the following, without any specific domain-based knowledge or expertise applied: 

The output from ChatGPT is unremarkable – lacking inspiration or uniqueness. It represents a consensus, but is still not good enough to meet the client's needs.

The visualisation is further instructive for interrogating the logic of the “problem-solution” compound ChatGPT presents. In the “rounded” double diamonds, the solution isn't even connected to the problem – there’s no “problem-solving” going on here. The solutions it presents are merely plausible and break down at the slightest challenge.

What it is good for is understanding how people who have thought about this general area have talked about it. It’s a good way to gauge consensus, to round out the arguments and (as an outsider – I’m not a designer or technologist) to round out the thinking before it gets presented to the client as a deliverable. Visually, that might look like the following:

It’s a good way to get to the questions of:

  • “have we thought about this use-case?”

  • “is a particular design concept relevant to us in this case?”

  • “will a particular architectural approach be worth considering in this situation?”

In short, it doesn’t replace any of my team, but it makes me a better manager of work product.

A call to arms for knowledge workers

Others have written far more extensively than I have about the effect of automation and AI on the job market.

But to close out, and to save myself some time, I will paste the response I got from ChatGPT on how professionals can stay relevant and employable without being “automated out” of the marketplace:

To remain relevant in the age of automation, knowledge workers need to develop skills that complement, rather than compete with, automation technologies. This includes developing creativity, problem-solving abilities, and emotional intelligence, as well as seeking out opportunities to collaborate with machines rather than seeing them as threats. Additionally, knowledge workers should embrace lifelong learning and be open to reskilling and upskilling to keep pace with changing technological landscapes. Ultimately, those who can adapt to the changing nature of work will be the ones who thrive in the age of automation.

I, for one, welcome our robot overlords.
