3 Comments
Nick Taylor:

I find the suggestion that some programmers are already trusting LLM-generated code without checking it very worrying. The problems of "hallucination" are well known, so why would they do that?

Michael Lones:

I think it's much the same as how some programmers previously copy-pasted sections of code from Stack Overflow without really understanding what they did. So long as the code appears to work (within the limits of their testing regime), many people are happy to follow the path of least resistance.

Michael Lones:

PS - the programme for this year's International Conference on Software Engineering (https://conf.researchr.org/program/icse-2024/program-icse-2024/) gives an indication of how much LLMs are permeating the field.
