Modern scientific societies are increasingly vulnerable due to their dependence on membership fees and journal subscriptions, which are being challenged by the rise of virtual networking and open-access publishing.
While AI tools are lowering the barrier to development, the gap between speed and manageability is growing. In just over a year and a half, AI code assistants have grown from an experiment to an integral part of modern development environments. They are driving strong productivity growth, but organizations are not keeping up with the associated security and governance issues.
On 13 January 2026, Bandcamp published "Keeping Bandcamp Human", declaring that "music and audio that is generated wholly or in substantial part by AI is not permitted on Bandcamp", alongside a strict prohibition on AI-enabled impersonation of other artists or styles. The post invites users to report releases that appear to rely heavily on generative tools, and it explicitly reserves the right to remove music "on suspicion of being AI-generated".
Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users' ability to speak and share information.
Librarians have been actively collaborating and talking about it almost every day, whether that means creating tutorials and digital learning objects or thinking through the conversations to have with instructors. It can feel like cognitive dissonance to work with AI regularly while also saying we're constantly mindful of its harms and biases.
It's common knowledge that we are awash in misinformation that can have severe negative consequences for society. When people hold false beliefs about the safety of vaccines, the outcomes of elections, or the causes of climate change, it is much more difficult for them to make responsible decisions on behalf of their families and communities. It is tempting to respond to this challenge by insisting that expert scientists know best and to dismiss those who challenge the experts.
A few years ago, I put together what I felt was a truly innovative concept, which I presented in a conference poster at an international meeting in my field. After the presentation, I spoke with another early-career scientist about my work and how it might apply to their findings. Two years later, they scooped me by publishing a preprint that presented my idea, with many of the same verbal formulations and an identical flow of ideas, without any acknowledgement or attribution of my work.
As the AI revolution accelerates and continues to reshape traditional business models, it has triggered a cascade of new legal, regulatory, and policy challenges. At the forefront of these emerging issues is a growing number of high-stakes legal battles between content creators and the major generative AI (GenAI) companies behind large language models (LLMs). This article examines key legal themes and critical questions arising from recent developments at the intersection of AI and copyright law.