
Developers who "pair code" with an AI assistant stand to learn as much as they do in traditional human-human pairings, but also show a less critical attitude toward their silicon-based partner's output, academics have found. Pair programming is a common practice in developer circles, though it did not become a formal "pattern" until the turn of this century. The practice is credited with producing better quality code, savings in development time, and knowledge transfer.
Yet increasingly, developers are working with code assistants rather than other walking, talking coders. So, researchers at Saarland University in Germany sought to "analyze knowledge transfer in both human-human and human-AI settings." One group of human-human pairs tackled a programming task, while another group of individual developers tackled the same task with the assistance of GitHub Copilot. The task involved implementing features within an existing codebase of approximately 400 lines, including both Python code and comments, distributed across five files.
The researchers sought to answer two questions. First, "To what extent do the frequency, length, and depth of knowledge transfer episodes differ between human-human pair programming and human-AI pair programming?" And second, "How do the quality and diversity of knowledge transfer episodes, including topic types and finish types, vary between human-human pair programming and human-AI pair programming?" The academics then tracked conversational "episodes" between the meat sack duos using a speech recognition tool, and used screen recordings to track interactions within the human-Copilot pairs.
Read at The Register