(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE).
Kristian Kersting is a professor of AI in Germany. (And some other things; he started hiding behind a banner halfway through the introduction :-) The central question of his talk: does size really matter in AI?
Is there something like “reasonable artificial intelligence”?
AI is not just data. AI is not just a model. He thinks it is more a “cooking recipe” for learning, for reading, for composing. When used responsibly, AI has the potential to help tackle some of the most pressing challenges across multiple domains. But… part of this potential seems to rest on “bigger is better”: the “scaling hypothesis”.
He showed some German initiatives for German-language LLMs, and also some European initiatives, as the Big Tech monoculture is a problem due to high costs and lack of expertise. According to him, AI depends on open source. He also mentioned that it doesn’t have to stay an isolated European effort: we should co-operate worldwide.
A quote he showed: “with petabytes of data, you could say correlation is enough, we can stop looking for models”. AI is exciting and bigger sometimes is better, but unfortunately scaling may not yield reasonable results.
People sometimes say you shouldn’t learn programming anymore as AI will take over the field. But there are many studies that show problems with this: bugs, dead ends, weird code. The more complex the task, the more errors. Likewise warehouse management: simple tasks might be OK, but the more complex it gets, the worse it becomes. LLMs cannot plan, but they can help with planning.
Karl Popper: “all observations and actions are hypothesis-driven”. So you should have an idea of what to look for in order to find something useful. Bigger is better might be valuable in the short term, but in the long term AI needs new paradigms. Just brute-force slurp-in-the-data has its limits.
Think about it as a software engineer? Reasonable, composable, together, open? (Deep) learning + (symbolic) reasoning + software engineering + AI in the loop? Several components combined? That way we might be able to steer and understand what is happening and what should happen.
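A minimal sketch of my own (not from the talk; the names, steps and rules are made up) of what combining a learned component with a small symbolic checking layer could look like:

```python
# My own illustration: a learned/LLM-style component proposes a plan,
# a small symbolic rule layer checks it before we accept it.

def learned_suggest_plan(order):
    """Stand-in for a learned component: its output might be wrong or incomplete."""
    return ["pick shelf A", "pick shelf C", "pack", "ship"]

ALLOWED_STEPS = {"pick shelf A", "pick shelf B", "pick shelf C", "pack", "ship"}

def symbolic_check(plan):
    """Symbolic layer: hard constraints we can inspect and explain."""
    if not all(step in ALLOWED_STEPS for step in plan):
        return False
    # Packing must happen, and shipping must be the last step.
    return "pack" in plan and plan[-1] == "ship"

plan = learned_suggest_plan({"items": ["A", "C"]})
if symbolic_check(plan):
    print("accepted plan:", plan)
else:
    print("rejected: ask the learned component again, or fall back to a human")
```

The point of such a split is that the symbolic part is something you can steer and understand, even if the learned part stays a black box.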
Kahneman’s book “Thinking, Fast and Slow” shows two ways we think as humans: fast, based on automatisms and experience; slow, based on deliberate reasoning.
Watch out for bias. Cultural bias, for instance. Some of it is better removed, some is better put in. How much in the way of moral values should be in your model? What is moral, what is cultural, what are unfair preconceptions?
Empowering fairness, inclusion and innovation is an integral part of the research into reasonable AI. Partially it is an EU requirement when researching it in Europe. We need more open AI research (but we might need a bit of public-private partnership, he thinks). Open source is not optional, it is foundational to AI.
Photo explanation: picture from the recent “on traxs” model railway exhibition in Utrecht (NL)