Although artificial intelligence can handle straightforward tasks in software development, we particularly need well-trained junior developers right now, argues Martin Pinzger, Professor of Software Engineering at the University of Klagenfurt. In this interview, he explains why large language models stagnate without fresh human input – and how they can nevertheless add real value to software development.
Martin Pinzger, do we still need software developers if AI can do the job just as well?
I’m rather sceptical in this regard. Recent scientific studies, as well as my own experience with AI, show that it performs very well and very quickly on simple, repetitive programming tasks – and in these cases one can mostly rely on it. However, AI is essentially a vast probabilistic model, which means there is always a degree of uncertainty. That uncertainty can lead to failures in software systems, and we simply cannot afford that – particularly in business-critical systems that must operate correctly.
What about more demanding tasks?
Most software projects involve substantial challenges: complex algorithms, business models with diverse and sometimes conflicting requirements, and dependencies on other systems and third-party components that must be taken into account during development. AI cannot fully address these challenges, or can only do so inadequately. That is why we still need people who can critically assess AI-generated results and decide which parts of a proposed solution can be adopted – and how. Only someone with sufficient experience and a solid understanding of how complex software systems are developed can do that. This raises a key question: how does someone gain that experience? How does one progress from being a supposedly redundant junior developer to a highly sought-after senior software engineer?
Does this mean that you still see opportunities for young software developers on the job market?
Absolutely. The more AI is used in software development, the greater the need for well-educated software developers.
How are your students navigating this challenging situation?
The need to progress from junior to senior developer is reflected in university education as well. Although AI can help solve the simpler tasks often assigned to junior developers, its benefits can be misleading. We urgently need the learning effect that comes from working through straightforward problems, because that is what enables students to tackle more complex challenges later on. Junior developers need practice, because AI will not become an expert any time soon. And meaningful practice requires perseverance and the ability to concentrate – two qualities that are increasingly under pressure today, not least because of the constant distractions of social media.
AI generates results based on existing knowledge. Can it ever develop genuinely new software – perhaps even in a creative way?
Today’s large language models, which we also use in software development, are essentially enormous neural networks with billions of parameters. When given a prompt, they calculate which word – or token – is most likely to come next. That is why this type of system is referred to as generative AI. It depends entirely on the data it was trained on and the prompt provided by the user. In software development, the training data comes from countless websites and developer forums. If people stop contributing new ideas and knowledge there, AI’s innovative potential will inevitably stall. Even when today’s AI happens to produce something novel, humans are still needed to verify whether the result is actually useful and functional. It becomes particularly problematic if future models are trained on unverified, AI-generated content. At that point, AI is effectively swimming in its own soup.
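The next-token mechanism described above can be illustrated with a minimal sketch. The vocabulary and scores here are entirely made up for illustration: a real model assigns a score (logit) to every token in a vocabulary of tens of thousands, converts the scores to probabilities, and emits a continuation based on those probabilities.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution over tokens."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical scores the model might assign to candidate next tokens.
logits = {"cat": 2.0, "dog": 1.0, "42": 0.1}
probs = softmax(logits)

# The most probable token is the one the model typically emits.
next_token = max(probs, key=probs.get)
print(next_token)  # → cat
```

The key point is that nothing here consults the world: the output depends entirely on the scores, which in turn depend entirely on the training data and the prompt.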
Do we still need new software at all?
Without question. Our staff and graduates develop software for real-world processes. And the world – and therefore the requirements placed on software – is constantly evolving. Therefore, software developers need analytical skills and creativity. The image of a lone programmer sitting in a basement fuelled by coffee and building software single-handedly is long outdated. Modern software development is collaborative. Today’s developers require not only analytical and technical expertise, but also strong teamwork and communication skills to design and build complex systems together. I am convinced that the success of a software project ultimately depends not on technology, but on people.
How do you personally use AI in your day-to-day research?
I work with coding agents and use them as sparring partners, trainees, or enhanced search tools. They are particularly useful for tedious tasks. However, I remain in control: I decide whether something is correct. I determine the direction of development. I guide the AI until it produces a satisfactory answer. If it cannot arrive at a suitable result within a reasonable amount of time or effort, I revert to conventional methods. Another practical issue is that chatbots and agents carry the entire conversation context forward, which often means you receive the same flawed answers repeatedly. In such cases, starting a new session can help achieve the desired result.
What are these agents capable of?
While I believe that generative models such as those behind ChatGPT have largely reached their current limits, I still see development potential in agents. An agent has a defined goal and an explicit strategy for achieving it. It can also be provided with tools to carry out specific tasks. In other words, it does not merely search through a vast space of probable solutions; it can deterministically execute intermediate steps using appropriate tools. That sounds promising. However, how a tool is selected and how its results are incorporated into the overall solution is, once again, decided by the AI. With more complex tasks, agents often end up going round in circles or losing direction, and the user needs to stop them before they consume too many resources without making progress.
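The agent pattern described above can be sketched in a few lines. Everything here is hypothetical (the goal, the tools, and the rule-based tool choice stand in for a model's decisions), but the structure is the point: a goal check, a tool selection, deterministic tool execution, and a step budget so the loop cannot run forever without progress.

```python
def agent(goal_reached, tools, choose_tool, state, max_steps=10):
    """Run tool-using steps until the goal is met or the budget is spent."""
    for step in range(max_steps):
        if goal_reached(state):
            return state, step          # success within budget
        tool = choose_tool(state)       # the agent's (here: rule-based) choice
        state = tools[tool](state)      # deterministic tool execution
    return state, max_steps             # budget exhausted: user must intervene

# Toy example: reach a value of at least 10 starting from 0.
tools = {"add3": lambda x: x + 3, "double": lambda x: x * 2}
choose = lambda x: "double" if x >= 3 else "add3"
final, steps = agent(lambda x: x >= 10, tools, choose, 0)
print(final, steps)  # → 12 3
```

The `max_steps` budget corresponds to the practical advice in the interview: the user, not the agent, decides when to cut losses.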
When does AI work well for your work?
Obviously, the more precisely I instruct the AI, the better the answers will be. If the AI acts almost exclusively as a translator between my language and the programming language, it can perform well. This is because the architecture behind generative models is based on so-called transformer models, which were primarily developed for translating texts. To do this, they do not need to add any new knowledge; they simply translate the words and phrases into the other language. Great progress has been made in this area.
The media often suggests that AI is becoming ever more intelligent. Does that not apply to large language models?
On the contrary, there is a risk that the models may become increasingly simplistic. These large neural networks typically select the most probable next token and only rarely choose alternatives. If systems consistently generate only the most probable outputs, solutions become increasingly uniform and conventional. This is particularly evident when AI is used to generate or edit texts: it tends to produce the same phrases repeatedly. Future models then learn from such outputs, reinforcing the effect. The second- and third-most probable options are chosen less and less by models trained in this way. In the end, what remains are standardised solutions.
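The uniformity effect described above can be demonstrated with a toy next-token distribution (the tokens and probabilities are made up). Greedy decoding always emits the single most probable token, so every generation is identical; sampling with a temperature keeps the second- and third-most probable options alive.

```python
import random
import collections

# Hypothetical probabilities for three candidate continuations.
probs = {"standard": 0.6, "novel": 0.3, "rare": 0.1}

def greedy():
    """Always pick the most probable token."""
    return max(probs, key=probs.get)

def sample(temperature=1.0, rng=random):
    """Draw a token proportionally to its (temperature-adjusted) weight."""
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # guard against floating-point leftovers

rng = random.Random(0)  # seeded for reproducibility
greedy_runs = collections.Counter(greedy() for _ in range(1000))
sampled_runs = collections.Counter(sample(rng=rng) for _ in range(1000))
print(dict(greedy_runs))   # only "standard" ever appears
print(len(sampled_runs))   # several distinct tokens survive
```

If future models are then trained on the greedy outputs, the "novel" and "rare" options vanish from the training data entirely, which is exactly the feedback loop the interview warns about.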
What would remain then?
(laughs) Perhaps one day the answer to every question posed to ChatGPT will simply be “42”.
Can you explain that for non-computer scientists?
Certainly. In Douglas Adams’s science fiction novel The Hitchhiker’s Guide to the Galaxy, a supercomputer is asked to calculate the answer to the ultimate question of life, the universe and everything. It computes for a very long time and finally announces that the answer is 42. In other words, the answer to all questions is ‘42’. Because this reference appears so frequently in online texts, it is highly likely that large language models will reproduce it. However, that answer is of little use when developing software systems.
About the person
Martin Pinzger joined the Department of Informatics Systems at the University of Klagenfurt in 2013, and is currently the head of the department. He studied computer science at the Vienna University of Technology, where he completed his doctorate in 2005. Martin Pinzger then worked as a research assistant at the University of Zurich and moved to Delft University of Technology in the Netherlands in 2008 as an assistant professor. His research focuses on AI for software engineering, mining software repositories, software testing, program analysis, and software visualisation. Martin Pinzger currently heads the FWF-funded project ‘SemImpact: Semantic Change Impact Analysis for Microservice-Based Systems’ and the FFG-funded project ‘Software Engineering Approaches for Evolving Systems’. In 2025, a paper on ChangeDistiller was recognised by the journal IEEE Transactions on Software Engineering as one of the most influential papers of the fourth decade.
The post Artificial Intelligence in Software Development: “AI Is Swimming in Its Own Soup.” first appeared on University of Klagenfurt.
