
“Drink the Kool-Aid” is a phrase we use a lot in our community, meaning to become unreasonably infatuated with a dubious idea, technology, or company. It has its origins in 1960s psychedelia, but given that it’s popularly associated with the mass suicide of the followers of Jim Jones in Guyana, perhaps we should find something else. In the sense we use it though, the metaphorical Kool-Aid has been flowing liberally of late with respect to AI and the hype surrounding it. This series has attempted to peer behind that hype, first by examining the motives behind all that Kool-Aid drinking, and then by demonstrating a simple example where the technology does something useful that’s hard to do another way. In that last piece we touched upon perhaps the thing that Hackaday readers should find most interesting: the LLM’s possibility as a universal API for useful functions.
It’s Not What An LLM Can Make, It’s What It Can Do
When we program, we use functions all the time. In most programming languages they are either built into the language or user-defined. They encapsulate a piece of code that does something, so it can be repeatedly called. Life without them on an 8-bit microcomputer was painful, with many GOTO statements required to make something similar happen. It’s no accident then that, when looking at an LLM as a sentiment analysis tool in the previous article, I used a function GetSentimentAnalysis(subject,text) to describe what I wanted to do. The LLM’s processing capacity was a good fit for the task in hand, so I used it as the engine behind my function, taking a piece of text and a subject, and returning an integer representing sentiment. The word “do” encapsulates the point of this article: maybe the hype has got it wrong in being all about what an LLM can make. Instead it should be all about what it can do. The people thinking they’ve struck gold because they can churn out content slop or get it to send emails are missing this.
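To make that concrete, here’s a minimal sketch of what such a function might look like in Python. It assumes an Ollama-style local endpoint at localhost:11434 and a -5 to +5 scoring scale; the function and model names, the prompt wording, and the scale are all illustrative choices rather than anything from the previous article. The LLM is passed in as a callable, so the wrapper itself is just prompt construction and defensive parsing.

```python
import json
import urllib.request


def ask_llm(prompt, model="llama3", url="http://localhost:11434/api/generate"):
    """Send a prompt to a local Ollama-style endpoint, return the text reply."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


def get_sentiment_analysis(subject, text, ask=ask_llm):
    """Return an integer sentiment score for `subject` within `text`.

    The LLM backend is injected via `ask`, so it can be swapped or stubbed.
    """
    prompt = (
        f"Rate the sentiment towards '{subject}' in the text below on a "
        "scale from -5 (hostile) to 5 (glowing). "
        "Reply with a single integer and nothing else.\n\n" + text
    )
    reply = ask(prompt)
    # Parse defensively: models sometimes wrap the number in extra prose.
    for token in reply.split():
        try:
            return int(token.strip(".,"))
        except ValueError:
            continue
    raise ValueError(f"No integer found in model reply: {reply!r}")
```

The point of injecting the `ask` callable is that the calling code never knows an LLM is involved at all; it just sees a function that takes text and returns a number.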

So we have an LLM, even a small one on our own computer, and looked at in that light it’s immediately apparent that it can become a function to do almost any processing task, if you wrap the right prompt and API call in a function definition. Of course that’s dangerous, because, if I may, I would like to coin a new phrase: function slop.
As an example, I can call an LLM to do simple numerical addition and it will perform the task, but doing so would be utterly pointless given the existence of the + operator. If you are going to use an LLM to perform a processing function, it’s important that it be a function where doing so makes sense, otherwise your function is just function slop. A quick web search tells me that function slop is not yet a thing, so I would like to take this moment to apologise for what I may have unleashed upon the world.
Function slop aside though, an LLM used for a processing task where it makes sense shouldn’t be ignored as a useful tool. These things are very good at summarising and categorising information in the way a human might do it, a task that’s often hard in traditional programming, so if the job in hand fits those capabilities then it makes sense to use them.
This has been a three-part series, and unlike Star Wars or The Hitchhiker’s Guide To The Galaxy, it’s probably going to stay that way. I hope that in our explanation we’ve successfully looked beyond the hype and found something useful in all this. It’s odd though: as the one writing it you might think I would be bubbling over with new ideas, but aside from the previous article’s sentiment analysis I still find myself with little I feel the need to use an LLM for. Which is maybe the point: it’s one thing to know a bit about them, but just because they’re there doesn’t mean you have to use them.
