Last week at AWS re:Invent, amid a flood of product announcements and cloud messaging, AWS introduced AWS AI Factories. The press release emphasizes accelerating artificial intelligence development with Trainium, Nvidia GPUs, and reliable, secure infrastructure, all delivered with the ease, security, and sophistication you've come to expect from Amazon's cloud. If you're an enterprise leader with a budget and a mandate to "do more with AI," the announcement is likely to prompt C-suite inquiries about deploying your own factory.
Lyft has rearchitected its machine learning platform LyftLearn into a hybrid system, moving offline workloads to AWS SageMaker while retaining Kubernetes for online model serving. Its decision to use managed services where operational complexity was highest, while keeping custom infrastructure where control mattered most, offers a pragmatic alternative to unified platform strategies. Lyft's engineers migrated LyftLearn Compute, which manages training and batch processing, to AWS SageMaker, eliminating the background watcher services, cluster autoscaling challenges, and eventually-consistent state management that had consumed significant engineering effort.
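To make the managed-service side concrete, here is a minimal sketch of launching a training job with the SageMaker Python SDK; the container image, IAM role, S3 paths, and instance type are placeholder assumptions, not Lyft's actual configuration:

    # Minimal sketch: a managed SageMaker training job (placeholder names throughout).
    from sagemaker.estimator import Estimator

    estimator = Estimator(
        image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/example-train:latest",  # hypothetical image
        role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",                    # hypothetical role
        instance_count=1,
        instance_type="ml.m5.2xlarge",
        output_path="s3://example-bucket/model-artifacts/",
    )

    # SageMaker provisions the instances, runs the container, and tears everything down
    # when the job finishes; there are no watcher services or autoscalers to operate.
    estimator.fit({"train": "s3://example-bucket/training-data/"})

The point of the sketch is the operational shape: job lifecycle, capacity, and state tracking are handled by the service rather than by in-house controllers.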
"As efforts shift from hype to execution, businesses are under pressure to show ROI from rising AI spend," the company wrote. "Large-cap CEOs are seeing solid returns on current programs, particularly across administration, internal efficiency, and customer-facing applications. However, 84% of these CEOs predict that positive returns from new AI initiatives will take longer than six months to achieve. In contrast, investors are pushing for faster impact: 53% expect positive ROI in six months or less."
Slurm is used to schedule computing tasks and allocate resources within large server clusters in research, industry, and government. SchedMD was founded in 2010 by the original developers of Slurm. The company not only drives the software's continued development but also provides commercial support and consulting to organizations that run Slurm in production. According to SiliconANGLE, SchedMD serves several hundred customers, including government agencies, banks, and organizations in the healthcare sector.
I recently implemented a feature here on my own blog that uses OpenAI's GPT to help me correct spelling and punctuation in posted blog comments. Because I was curious, and because the scale is so small, I take the same prompt and fire it off three times, once per model. Lightly tidied up, the code looks like this:

    # Assumes a litellm-style completion() helper; `messages` holds the prompt built
    # earlier and settings.OPENAI_API_KEY comes from the app's settings.
    from litellm import completion

    for model in ("gpt-5", "gpt-5-mini", "gpt-5-nano"):
        response = completion(
            model=model,
            api_key=settings.OPENAI_API_KEY,
            messages=messages,
        )
        record_response(response)
The conversation about AI in the workplace has been dominated by the simplistic narrative that machines will inevitably replace humans. But the organizations achieving real results with AI have moved past this framing entirely. They understand that the most valuable AI implementations are not about replacement but collaboration. The relationship between workers and AI systems is evolving through distinct stages, each with its own characteristics, opportunities, and risks. Understanding where your organization sits on this spectrum, and where it's headed, is essential for capturing AI's potential while avoiding its pitfalls.
It is becoming increasingly difficult to separate the signal from the noise in the world of artificial intelligence. Every day brings a new benchmark, a new "state-of-the-art" model, or a new claim that yesterday's architecture is obsolete. For developers tasked with building their first AI application, particularly within a larger enterprise, the sheer volume of announcements creates a paralysis of choice.
Remember the Hans Christian Andersen story The Emperor's New Clothes? It's about an emperor who is convinced by some vendors' BS to buy what's described as a beautiful set of clothes. There's only one problem: the clothes are imaginary. When the emperor wears (or actually doesn't wear) the clothes in a big parade, his constituents are afraid to say that he's wearing no clothes. Until a young child blurts out the truth: "The emperor is wearing no clothes!"
The bank's analysts are forecasting that AI's share of the overall data center market will double to 30% over the next two years, eating into the share held by conventional cloud workloads. By 2030, the firm said in a new report, overall power consumption from data centers looks set to jump 175% from 2023 levels, more than the 165% increase the firm previously forecast.
Test-time scaling for AI agents is increasingly shifting from longer thinking to controlling tool calls. In many practical applications, such as web search and document analysis, the number of external actions determines how deep an agent can dig. Each tool call grows the context window, drives up token consumption, and incurs additional API costs. For companies, this can quickly add up.
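As a rough sketch of what controlling that budget can look like (not tied to any particular framework), an agent loop can simply cap the number of tool calls; call_model and run_tool below are hypothetical stand-ins for the model API and tool executor:

    # Minimal sketch: cap the number of tool calls an agent may make.
    # call_model(history) is assumed to return {"tool": ..., "args": ...} when it wants
    # a tool, or {"tool": None, "answer": ...} when it is done; both helpers are hypothetical.
    def run_agent(task, call_model, run_tool, max_tool_calls=8):
        history = [{"role": "user", "content": task}]
        for _ in range(max_tool_calls):
            reply = call_model(history)
            if reply.get("tool") is None:                       # final answer, under budget
                return reply["answer"]
            result = run_tool(reply["tool"], reply["args"])     # one budgeted external action
            history.append({"role": "tool", "content": result})
        # Budget exhausted: ask the model to answer with what it has gathered so far.
        history.append({"role": "user", "content": "Answer now using the information above."})
        return call_model(history)["answer"]

The budget parameter is the knob that trades answer depth against context growth, tokens, and per-call API cost.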
The world is converging onto two AI stacks. One, championed by the People's Republic of China (PRC), is state-directed, closed, and surveillance-heavy; ours is democratic, market-driven, and safety-aligned. Every country will end up on one of these two stacks, whether it chooses to or not. The strategic imperative for the US is to ensure that the democratic stack prevails.
AI may be blamed for this year's layoffs, but a new global survey says the technology could fuel a rebound in some entry-level hiring next year. Public-company CEOs say AI will create more jobs in 2026, according to an annual outlook survey conducted by advisory firm Teneo and released this month. Sixty-seven percent of the CEOs surveyed said they expect AI to increase entry-level hiring in 2026, and 58% said they plan to add senior-leadership roles as well.
The question isn't whether agentic AI will change legal work. It's whether firms will change how they adopt technology. Successful adoption requires both well-designed technology and robust people-centered strategies. You can't technology your way out of habit formation challenges, and you can't adoption-strategy your way out of poorly designed tools. Most organizations are investing heavily in one while underinvesting in the other.
Zoom released its AI assistant to the web today as part of its AI Companion 3.0 release. The company is also allowing free users to access the assistant's features, such as summarizing meetings, listing action items, or getting insights from meetings, though with limits. The company said that basic-plan users can use the AI Companion in three meetings every month, each of which will include a meeting summary, in-meeting questions, and AI note-taking capabilities.
Those tools are also expanding as they gather and model more prompt data, so that companies can see their AI visibility in Google's AI Overviews and AI Mode. That's a big deal. Previously, publishers and brands were largely flying blind when it came to AI-driven discovery; now they are getting rare visibility into a part of AI search that has largely operated as a black box.
For the last year and a half, two hacked white Tesla Model 3 sedans, each loaded with five extra cameras and a palm-sized supercomputer, have quietly cruised around San Francisco. In a city and era swarming with questions about the capabilities and limits of artificial intelligence, the startup behind the modified Teslas is trying to answer what amounts to a simple question: How quickly can a company build autonomous vehicle software today?
Technology stocks are buzzing this morning with a wave of developments. Among them, Tesla (Nasdaq: TSLA) has captured the spotlight. Wedbush tech analyst Dan Ives is calling 2026 a "monster year" for Tesla and Elon Musk as the EV maker leans harder into autonomous driving and robotics. He sees Tesla's valuation climbing to around $2 trillion next year, with a bull-case scenario of $3 trillion by year-end 2026 amid a successful AI strategy. Wedbush has reemphasized its "outperform" rating on TSLA stock.
Inside some of the Air Force's oldest refueling aircraft, technicians are crawling through tight, dirty spaces, painstakingly cleaning sealant on fuel tanks and tightening loose rivets. They climb into the dark, cramped tanks with little more than a flashlight, some tools, and shaky comms. It can be hard to breathe, the air smells like jet fuel, the fixes aren't always clear, and the punishing work can be dangerous if done wrong.
Merriam-Webster has settled on a word that represents 2025, and that word is "slop." The dictionary-maker defines "slop" as "digital content of low quality that is produced usually in quantity by means of artificial intelligence," something that many people have become familiar with as AI-generated content permeates the internet. This year, some of the most popular sites on the web took steps to stave off the infestation of AI slop, including YouTube, Wikipedia, Spotify, and Pinterest.
Early on Monday, the issuance of new cryptocurrency tied to Bittensor, a decentralized network of AI projects, dropped by half. The halving was the first the currency has experienced and came about by design, reflecting how Bittensor shares the same anti-inflationary architecture as Bitcoin. The event also serves as a milestone for one of the most novel and ambitious cryptocurrencies to launch in years.
The AI transparency law mandates that advertisements clearly identify when they feature synthetic performers, digitally created media designed to appear as real people. The law aims to prevent consumers from being misled by content that blurs the line between reality and artificial creation. The second law updates New York's rights of publicity by requiring companies to obtain consent from heirs or executors before using a deceased individual's name, image, or likeness for commercial purposes.
Demand for coders has collapsed. Until this year, programming had been considered one of the most secure, predictable, and lucrative career options. But now we're seeing reports that employment for programmers has collapsed to its lowest level since 1980. On the surface, the connection is obvious. AI agents can write code, and they can do so far faster and more cheaply than professional programmers. Code is structured text, something AIs are particularly well-suited to understand and reproduce.
For months, Tesla's robotaxis in Austin and San Francisco have included safety monitors with access to a kill switch in case of emergency, a fallback that Waymo currently doesn't need for its commercial robotaxi service. The safety monitor sits in the passenger seat in Austin and in the driver's seat in San Francisco. Neither service is fully open to the public yet, relying instead on customer waitlists.
Every device, system, or application we touch at work and home is designed and enabled around standards. Who comes up with these standards? They are formulated by technology or domain specialists, many working either on a volunteer basis or through their companies, committed to advancing the capabilities of their chosen technology areas in an ever-changing economy. The standards bodies that convene to hammer out common standards are always looking for interested professionals willing to contribute their time and insights.
Google Gemini is quickly becoming my favorite all-around AI tool. Not only is the quality of the output generated by the latest model, Gemini 3 Pro, impressive, but Google has also added a few great features that streamline interactions with the tool.