One of the really interesting findings was the median date when developers started using AI tools. They found it was April 2024, which corresponds fairly neatly to the release of Claude 3 and Gemini 2.5. This is really the dawn of the reasoning or thinking models, and around that same time, we got much better at tool-calling. For coding tasks, you really need to be able to leverage external information in order to problem-solve.
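As a rough illustration of the tool-calling pattern mentioned above, here is a minimal sketch: the model is handed a function schema, decides when to invoke it, and the caller runs the tool and feeds the result back. It uses the OpenAI Python SDK's chat-completions shape as one common example; the `search_docs` tool, its behavior, and the model name are assumptions for illustration, not anything from the study being discussed.

```python
# Minimal tool-calling loop: declare a tool, let the model request it,
# run it locally, and return the result to the model.
import json
from openai import OpenAI

client = OpenAI()

def search_docs(query: str) -> str:
    """Hypothetical external lookup the model can lean on while coding."""
    return f"Top result for {query!r}: ..."  # stub for illustration

tools = [{
    "type": "function",
    "function": {
        "name": "search_docs",
        "description": "Search library documentation for a query.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "Why does pandas raise SettingWithCopyWarning here?"}]
response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

msg = response.choices[0].message
if msg.tool_calls:
    messages.append(msg)  # keep the assistant's tool request in the transcript
    for call in msg.tool_calls:
        if call.function.name == "search_docs":
            result = search_docs(**json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    # A second create() call with the updated messages would produce the final answer.
```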
Figma is launching some new updates that allow AI models to directly communicate with its app-building tool and access designs remotely. Figma's Model Context Protocol (MCP) server - a bridge that enables AI models to tap directly into the code behind prototypes and designs created using Figma's tools - has now been expanded to support the design platform's AI prompt-to-app coding tool, Figma Make.
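For a sense of what that bridge looks like from the client side, here is a sketch of an AI agent connecting to an MCP server and discovering its tools, using the official Python SDK (the `mcp` package). The local SSE endpoint shown is an assumption rather than Figma's documented address, and no specific Figma tool names are assumed.

```python
# Connect to an MCP server over SSE and list the tools it exposes
# (design context, generated code, and so on, depending on the server).
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

MCP_URL = "http://127.0.0.1:3845/sse"  # assumed local endpoint; check Figma's docs

async def main() -> None:
    async with sse_client(MCP_URL) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

if __name__ == "__main__":
    asyncio.run(main())
```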
Artificial intelligence has notorious problems with accuracy - so maybe it's not surprising that using it as a coding assistant creates more security problems, too. As a security firm called Apiiro found in new research, developers who use AI produce ten times more security problems than their counterparts who don't use the technology. Looking at code from thousands of developers and tens of thousands of repositories, Apiiro found that AI-assisted devs were indeed producing three or four times more code - and as the firm's product manager Itay Nussbaum suggested, that breakneck pace seems to be causing the security gaps.
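To make "security problems" concrete, here is a hypothetical example, not drawn from Apiiro's report, of one common class of flaw such scans flag: SQL built by string interpolation, next to the parameterized version that avoids it.

```python
# Hypothetical illustration of an injectable query versus a parameterized one.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Injectable: username = "x' OR '1'='1" returns every row.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized: the driver treats the value as data, not SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```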
There is a plague in software development today, and it has become something we are gradually beginning to accept. It goes something like this: you're stuck on some code, so you paste a snippet into your favorite AI tool, hoping to debug it. You might get a solution, but then the AI introduces a new bug, which you now have to spend time debugging instead.
Nearly three-quarters of organizations have suffered at least one security breach or incident in the last year that can be blamed on insecure coding practices. Analysis from SecureFlag found 74% of organizations have suffered an incident as a result of dodgy code, with nearly half of those hit by multiple breaches. The report comes as AI is beginning to take over some coding duties from developers. Debate remains over whether that code is secure.
"How's that going to work when ten years in the future you have no one that has learned anything," he asked. "My view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have."