Generative Generalization For A Generation

Is ChatGPT as awesome or as awful as everyone is saying? We explore

By Azfarul Islam

The first time I heard about “Google” was c. 1999; not to be confused with the more colloquially familiar “googly.” Back then, an acceptable answer to “Do you browse the net?” was “No. What is that?” Imagine reading about this from your mobile supercomputer as you Google (now a verb) a googly. And then saunter down a rabbit hole of bowling techniques, ending up somewhere unprintable around 3:11am. The democratization of the world’s information was a paradigm shift. In many cases, it tilted us awkwardly toward unexpected places.
Recently, ChatGPT and “generative artificial intelligence” manifested in the collective consciousness. What feels like an acute onset of progress is par for the course with technology: gradually, then suddenly. Each time I wanted to write about it, there were new developments in the core technology, the social-corporate landscape, or in its surprising use cases. Such changes precipitated within mere weeks, even days. By the time you read this, I reserve the right to be utterly wrong. I hypothesize that whatever happens will nestle in some uncanny valley between a dystopian intelligence (because we have written far too much about those) and the competent regurgitation of today.

Like most people who experimented with ChatGPT and similar tools, I went through degrees of wonder, curiosity, and discomfort. Arthur C Clarke’s famously overused quote, “Any sufficiently advanced technology is indistinguishable from magic,” played repeatedly in my head as I pondered the seemingly asymptotic nature of ChatGPT’s progress. But really, the underlying technology isn’t asymptotic at all. It is actually very average. Yes, that is a statistical pun.

Without delving too deeply into the inner workings of the technology – ironically obfuscated by its erstwhile-non-profit parent, OpenAI – I will attempt simplicity. “Large language models” (LLMs) like ChatGPT are not true intelligence, or at least not in the way humans think about intelligence. What LLMs do is draw on the absolutely gigantic body of text (a corpus) they have been fed (trained on) to generate the most probable combinations of text in response to your prompted questions or comments, without actually understanding any of it. If this reminds you of predictive text from your old Nokias (or, erm, iPhones), you are not far off. LLMs are typically fine-tuned with human input for ethics and safety, among other reasons.
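To make the predictive-text comparison concrete, here is a deliberately toy sketch in Python: a bigram model that counts which word tends to follow which in a tiny made-up corpus, then predicts the most probable next word. This is an illustration of the general idea only, not how ChatGPT is actually built; real LLMs learn vastly richer statistics over billions of parameters, but the core move – likely continuations rather than understanding – is the same.

```python
from collections import Counter, defaultdict

# A tiny made-up "corpus" (real models train on a large slice of the internet).
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the single most probable next word after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
print(predict_next("sat"))  # "on"
```

Chain `predict_next` calls and you get fluent-looking but meaning-free text, which is also a decent intuition for why such systems can confidently produce nonsense.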

Rather than perusing Wikipedia and summarizing articles yourself, ask ChatGPT. It will coalesce the most probable related information, averaged over other probable responses, into any number of seemingly coherent answers. Recall that this isn’t intelligence, but something that figures out the most likely, and in some cases most common, permutations of text.

Like anything that operates in a stochastic space (where there is no precise way to determine the response to a given input: there are myriad likely responses), ChatGPT is not immune to generating utter bullshit, politely termed “hallucinations.” Why? Because those combinations are still probable; ChatGPT doesn’t care about meaning, it is optimized to produce likely text sequences. For example, when I requested words rhyming with “mother,” it simply made a few up. This is why you should always do your own research if you ask one of these LLMs to generate text. It (currently) never provides references, so good luck with academic writing.

So what can ChatGPT do? For my experiments, I challenged it to: re-create a political philosophy essay on Machiavelli and Plato from my university days, design a modernist Bangladeshi menu, make a pun-filled rap album listing, and compare itself to penicillin and the atom bomb. Multiple responses can be generated from the same prompt; small changes to a prompt can generate quite different responses. How did it fare?

The essay was milquetoast: it made the classic arguments but was unable to find an interesting focal point even when asked. My original essay had focused on their contrasting perspectives on “virtù” and “virtue,” and was written in the shadow of the 2008 Credit Crunch. Still, ChatGPT’s version could serve as a reasonably good starting point.

The Bangladeshi menu repeatedly suggested a molecular gastronomy version of mango lassi, mango-based starters, a clearly Indian beef main, and, inexplicably, matcha pudding. I am not sure whether it lacks data on Bangladesh or is simply pandering to biases in the data it has.

The rap album (“Pork Rapling,” named by a friend) was inspired by a WhatsApp chat on pork-based puns. I wanted to see if ChatGPT could be as acerbic as our entries, which included “Spread that butt (on rice) ft Sir Mix-A-Lot,” “RUN MSG,” and “roast.me.” It rendered safe options like “Ham Hock Hop” and “Bacon Bounce,” and hesitated to use real artist names for collaborations.

The final prompt was an attempt at vindication on my part, requiring careful dances around its ethics censors; after some massaging, the results were clickbait (and we have enough of that). Therein lies the greater potential, and the greater risk. When admonished or corrected, ChatGPT responded with eerily docile apologies and acquiescence, almost like an echo chamber. There is more to unpack here another time.

One way to look at ChatGPT is as a mirror held up to a subset of human textual information (I hesitate to use the word “knowledge”). With perfectly adequate spelling and grammar, it can certainly provide middle-of-the-road solutions to your requests.

What does this mean for all the scaremongering about jobs and the world being taken over? It is a fluid, reflexive (in the anthropological sense) situation that depends on how people respond. At the very least, for the majority who find writing anathema, it could be a positive paradigm shift. And don’t let my critique discourage you: try it out yourself.

For now, I wonder how much of my own writing was used to train these LLMs. As someone with a very limited online presence, almost outside the system, I might fancy myself as a future revolutionary, fighting against the forewritten tyranny of a rogue AI. But who am I kidding? This article was written on Google Docs, so my visions are already within the corpus of probability. But perhaps said AI will be blind to me by hallucinating and misspelling my name like all my teachers before me.

Azfarul Islam is a speculative writer. Not of speculative fiction, just speculative in general, on account of being a data scientist/product owner. He likes variance, especially with his food. He is aware of the irony of his initials, thank you very much.
