The Conceptual Knowledge Worker
I was watching a documentary last week about the conceptual artist Jeff Koons. The guy who made ‘Balloon Dog’, a version of which sold for about $58m. I am not a huge fan of Koons the man. I can’t tell if he is a troll or sincere; either way… But what jumped out at me was his comment that ‘you wouldn’t expect an architect to build the house’. You see, Koons comes up with the idea for an art piece but has no part in producing it. He is a conceptual artist.
If we don’t require an artist to make the art, should we expect a knowledge worker to write a report?
It is not a direct parallel but an interesting point to explore. I am a bit of a generalist when it comes to being an economist. My strengths lie in creativity and the ability to connect ideas across disciplines and worldviews. In other words, my strengths are conceptual rather than intellectual (in the IQ sense anyway).
There have long been parallels between the conceptual artist and the conceptual knowledge worker. One that comes to mind is the esteemed professor who runs a lab. She directs the research but may not conduct it herself. Yet she will be listed as an author – often the senior author – of a journal paper she may never have touched.
I find this topic fascinating because we are now squarely in the AI age of the Large Language Model (LLM). Anyone who has kept up with LLMs will have noticed they are getting good – like, really good. I would not let an LLM write this article, as it would not have my own ‘voice’, and I want this to have my voice. In fact, I did not touch AI for this article, as it would have diminished my experience. An AI would use ‘better’ grammar than I do; however, I learned long ago, teaching English in Japan, that ‘better’ in the technical sense does not equate to better at communicating ideas. When I write a journal paper, though, it does not have my voice. I write it to conform to a standard that makes it very easy for an LLM to replicate to a high level of quality.
For me, the value of research lies in formulating hypotheses and gathering data. The process is crucial, and so is the communication of ideas. But while AI cannot do the research, I have been wondering whether, when the writing style needs to be formulaic, anything is lost in having AI compile a first draft. In this sense, AI is an assistant, and the knowledge worker is a creative director and editor.
Look at this quote from Koons:
“When you have an idea for a work and when you've finished your model for it, for the artist it's almost complete, in a way. But then bringing it to the finish is really something you do for the audience.”
Does this apply to a knowledge worker?
Once the research is complete, is an LLM an acceptable means, with expert direction, to finish the work, i.e. do the writing? For some people, the answer is no. They will feel something is lost; however, as the quality of LLMs progresses, what is lost seems to be diminishing rapidly. I like to write, so there is that. But I don’t particularly enjoy writing proposals or journal papers – they are too formulaic.
The conceptual knowledge worker mirrors the delegation seen in conceptual art, where creative intent remains with the original thinker, but execution relies on technical capabilities residing elsewhere.
By spending less time on what I consider ‘administrative’ writing, I have freed up a lot of time for the work that matters most to me: formulating new methods, talking with communities, developing hypotheses, creating research designs, and so on. This quote from Sol LeWitt resonates.
“In conceptual art the idea or concept is the most important aspect of the work. When an artist uses a conceptual form of art, it means that all of the planning and decisions are made beforehand and the execution is a perfunctory affair.”
Sol LeWitt, ‘Paragraphs on Conceptual Art’, Artforum, Vol. 5, No. 10, Summer 1967, pp. 79–83
Of course, some significant issues still need to be addressed with AI, which I won’t explore here in depth. Determining intellectual ownership, accountability, and responsibility is essential. There are also deeper issues around loss of intellectual authenticity, skill erosion and dependency, ethical concerns, originality concerns, and many more.
I am not overly concerned about originality, responsibility, skill, or authenticity when using LLMs. The ideas and concepts I feed into AI are my own, and I am entirely responsible for any content I put out into the world. If AI hallucinates and invents references or facts, accountability is mine, as I did a poor job as a director and editor. This is no different than citing a lousy study in a literature review.
The topic I feel most uneasy about is job displacement by AI. This is a serious issue, but not so much for knowledge workers in wealthy countries – the audience for this article. Could a professional gardener use AI to produce a good report outlining an insurance company’s strategic direction for the next two years? I think the answer is no, and it will be for a long time.
Without deep domain expertise, there is no chance that the output from AI would hold any value. LLMs act as a co-pilot. They are great at organising content but can’t function independently of a skilled human. Will they reduce the number of experts required in specific industries? Undoubtedly yes. The boost to productivity means that fewer people will be required to manage tasks. Will knowledge workers have more leisure time? Fat chance; when has that ever happened with the advent of new technology?
A field experiment with Boston Consulting Group found that 758 consultants across a skills distribution benefited significantly from AI augmentation, with those below the average performance threshold improving by 43% and those above it by 17% compared with their control scores. However, the authors also caution that:
The capabilities of AI create a “jagged technological frontier” where some tasks are easily done by AI, while others, though seemingly similar in difficulty level, are outside the current capability of AI.
Dell'Acqua, F., McFowland III, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., ... & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper, (24-013).
AI cannot replace knowledge workers, at least in its current state. This brings me to the skills required to succeed as a knowledge worker. Since we (my LinkedIn people and I) all have relatively similar access to these powerful tools, we must distinguish ourselves elsewhere.
If I were to pick the key skills for knowledge workers over the next few years, they would be emotional, cultural, and social intelligence, combined with a broad understanding of multiple interrelated concepts, from technical knowledge like coding to theoretical realms like philosophy. A wide knowledge base can be amplified through AI to generate powerful new insights. Career trajectories will shift towards higher-order cognitive, relational, and strategic skills, as AI augmentation redefines roles around distinctly human strengths such as emotional intelligence, critical thinking, interdisciplinary expertise, and strategic innovation.
For a long time now, an author’s success has correlated much more closely with their ability to build an audience and promote their work than with the quality of their writing. I see the same shift happening in academia. I don’t celebrate this shift; my fellow introverted scientists would likely agree. I have noticed many knowledge worker friends who are just as competent as me, or more so, struggling recently. Not because their work is not great, but because they have not adapted to a rapidly changing work context. I also find these adaptations uncomfortable. I am now performing roles that go against my introverted tendencies. However, discomfort is not a good excuse for avoidance. If it were, I wouldn’t be running ultra-marathons.
I am not promoting this new direction for knowledge work, nor strongly opposing it – as with most large technological innovations, I think that ship has sailed. I believe the knowledge workers who will do best in the LLM age are the ones who can relate to their work on a human level. They will be people who can deeply understand a client’s needs or a research topic and use the tools of AI to deliver the best results. They will have emotional, social, and cultural intelligence and act as a conduit between the technical and the human skills the work requires. Who knows, maybe the humanities will rise from the STEM-ravaged ashes to become the pathway to career success.