Alternative title: “the dark side of integrating an LLM (large language model) in your software”. You run into several challenges. He illustrates them with https://www.learntail.com/ , something he helped build. It creates quizzes from a text to make the reader more active.
What he used was the Python library LangChain to connect his app to an LLM. A handy trick: you can have it send extra format instructions to ChatGPT based on a pydantic model. If it works, it works. But if you don’t get proper JSON back, it crashes.
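A hand-rolled sketch of the idea, to make it concrete (LangChain’s real helper is its pydantic output parser; the quiz schema, prompt wording and `parse_quiz` helper below are invented for illustration):

```python
import json

# Invented quiz schema; LangChain derives something like this from a pydantic model.
QUIZ_SCHEMA = {"question": str, "answers": list, "correct_index": int}

def format_instructions(schema):
    """Build the extra prompt text that tells the LLM which JSON shape to return."""
    fields = ", ".join(f'"{name}": <{typ.__name__}>' for name, typ in schema.items())
    return f"Answer ONLY with a JSON object of the form {{{fields}}}."

def parse_quiz(llm_reply):
    """Parse the LLM's reply; raises ValueError when it isn't proper JSON."""
    try:
        return json.loads(llm_reply)
    except json.JSONDecodeError as exc:
        # This is the crash the talk warns about: the model chatted instead
        # of returning JSON, so there is nothing sensible to recover.
        raise ValueError(f"LLM did not return JSON: {exc}") from exc
```

You append `format_instructions(QUIZ_SCHEMA)` to your prompt and hope the model complies; when it doesn’t, `parse_quiz` blows up.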
Some more challenges:
There is a limit on prompt length. If the prompt gets too long, the LLM won’t fully understand it anymore and will ignore some of the instructions.
An LLM is no human being, so “hard” or “easy” don’t mean anything to it. You have to be more machine-explicit, like “quiz without jargon”.
The longest answer it provides is often the correct one, because in the data it has been trained on, the longest answer often is the correct one…
Limits are hard to predict. The token limit covers input + output combined, so you basically have to know beforehand how many tokens the AI needs for its output.
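A sketch of the budgeting this forces on you (the 4096-token context size and the “roughly four characters per token” rule of thumb are assumptions for the example, not the talk’s actual numbers):

```python
CONTEXT_LIMIT = 4096  # assumed total token budget: input + output combined

def estimate_tokens(text):
    """Very rough rule of thumb: about one token per 4 characters of English."""
    return max(1, len(text) // 4)

def max_output_tokens(prompt, context_limit=CONTEXT_LIMIT):
    """Whatever the prompt doesn't use up is all the model has left to answer with."""
    remaining = context_limit - estimate_tokens(prompt)
    if remaining <= 0:
        raise ValueError("Prompt already fills the whole context window")
    return remaining
```

So a longer prompt silently shrinks the room for the answer, which is why you have to guess the output size up front.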
Rate limiting is an issue, for instance if you start chunking your text into many separate requests.
An LLM is not a proper API:
You need to do syntax checking on the answer.
Are all the fields present? Validation.
Are the answers of the right type (float/string/etc)?
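Those three checks together, as a hedged stdlib sketch (the field names are invented for the example; a pydantic model would give you all of this for free):

```python
import json

# Invented example schema: field name -> expected Python type.
EXPECTED = {"question": str, "answers": list, "difficulty": float}

def validate_answer(raw):
    """Syntax check, presence check and type check on an LLM reply."""
    try:
        data = json.loads(raw)  # 1. is it valid JSON at all?
    except json.JSONDecodeError as exc:
        raise ValueError(f"not valid JSON: {exc}") from exc
    for field, expected_type in EXPECTED.items():
        if field not in data:  # 2. are all the fields present?
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):  # 3. is it the right type?
            raise ValueError(f"{field} should be a {expected_type.__name__}")
    return data
```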
And hey, you can still write code yourself. You don’t have to ask the LLM everything, you can just do the work yourself, too. An open question is whether developers will start to depend too much on LLMs.
My name is Reinout van Rees and I work a lot with Python (programming language) and Django (website framework). I live in The Netherlands and I'm happily married to Annie van Rees-Kooiman.