Saying that anything AI-generated in the kernel is a problem in itself is bullshit.
I never said that.
Same with human-generated code. AI bugs are not magically more creative than human bugs. If the code is not readable or doesn’t follow conventions, you reject it regardless of what generated it.
You may think that, but preliminary controlled studies do show that more security vulns appear in code written by a programmer who used an AI assistant: https://dl.acm.org/doi/10.1145/3576915.3623157
More research is needed, of course, but I suspect that because humans are capable of more sophisticated reasoning than LLMs, code that a human writes and derives from their own understanding tends, on average, to be more robust.
I’m not categorically opposed to use of LLMs in the kernel but it is obviously an area where caution needs to be exercised, given that it’s for a kernel that millions of people use.