The global AI market hit $391 billion in 2025. That number gets thrown around a lot, but what does it actually mean on the ground? Mostly this: AI stopped being a tech industry conversation and became a business operations conversation. Companies aren’t asking “should we use AI?” anymore. They’re arguing about which tools to buy and how fast to roll them out.
The market numbers, in context
A 35.9% compound annual growth rate sounds abstract until you realize it means the market will be roughly five times its current size within five years. About 83% of companies now use AI in some form, though “use AI” ranges from “we have a chatbot” to “our entire supply chain runs on predictive models.”
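To make the compounding concrete, here's the back-of-the-envelope arithmetic (a quick sketch using the $391 billion base and 35.9% CAGR cited above; no other data is assumed):

```python
# Sanity check on the growth claim: a 35.9% CAGR compounds to about a
# 4.6x multiple over five years -- the "roughly five times" shorthand
# above. Base size and growth rate are the figures from the text.

base = 391e9   # 2025 market size in dollars
cagr = 0.359   # compound annual growth rate

for year in range(1, 6):
    multiple = (1 + cagr) ** year
    print(f"Year {year}: ${base * multiple / 1e9:,.0f}B ({multiple:.2f}x)")
```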
The job creation figures are worth questioning too. The 97 million new positions projection from the World Economic Forum sounds huge, but it includes roles like “AI trainer” and “prompt engineer” that barely existed two years ago. The real story is job transformation, not just job creation.

Agentic AI is the big shift
The buzzword of the year is “Agentic AI” — systems that don’t just answer questions but take actions on your behalf. Gartner predicts a third of enterprise apps will include some agentic capability by 2028. That’s the prediction, anyway. The reality right now is messier: most agentic systems still need heavy guardrails and human oversight.
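In practice, "heavy guardrails and human oversight" often boils down to something unglamorous: an approval gate inside the agent's action loop. Here's a minimal sketch of that pattern; the tool names and risk tiers are hypothetical, not any particular vendor's API:

```python
# Minimal human-in-the-loop guardrail for an agentic system: the agent
# proposes actions, low-risk ones run automatically, and anything above
# the risk threshold waits for explicit human sign-off. Tool names and
# tiers are illustrative, not a real framework's API.

LOW_RISK = {"search_docs", "summarize"}       # auto-approved actions
HIGH_RISK = {"send_email", "execute_trade"}   # require human approval

def run_action(action: str, payload: str) -> str:
    """Stand-in for actually invoking a tool."""
    return f"ran {action} with {payload!r}"

def guarded_step(action: str, payload: str) -> str:
    if action in LOW_RISK:
        return run_action(action, payload)
    if action in HIGH_RISK:
        answer = input(f"Agent wants to {action}({payload!r}). Allow? [y/N] ")
        if answer.strip().lower() == "y":
            return run_action(action, payload)
        return "blocked: human declined"
    return "blocked: unknown action"  # default-deny anything unrecognized

print(guarded_step("search_docs", "Q3 revenue"))
print(guarded_step("send_email", "draft to client"))
```

The default-deny on unrecognized actions is the part most real deployments get wrong: the safe failure mode is refusing, not guessing.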
Where AI is making a tangible difference today: Waymo’s robotaxis are handling real passengers in San Francisco and Phoenix. Medical imaging AI is catching things radiologists miss (and occasionally flagging things that aren’t there). Drug discovery timelines are shrinking, though we won’t know the real impact for years since clinical trials take so long.
Open source is changing the game
DeepCogito v2 showed that open-source models can compete on reasoning tasks that used to be locked behind proprietary APIs. The practical benefit? Developers can actually inspect and modify these models instead of treating them as black boxes. That matters for trust, and it matters for companies worried about vendor lock-in.
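Here's what "inspect and modify" means concretely: open weights are just a local artifact you can load and examine. A rough sketch using the Hugging Face transformers library (the model ID is a placeholder; swap in whichever open-weight checkpoint you actually have access to):

```python
# Rough sketch: with open weights, nothing is a black box. The full
# architecture spec and every weight tensor are visible and modifiable.
# "meta-llama/Llama-3.1-8B" is a placeholder model ID.

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

print(model.config)                                # full architecture spec
print(sum(p.numel() for p in model.parameters()))  # total parameter count
for name, param in list(model.named_parameters())[:5]:
    print(name, tuple(param.shape))                # individual weight tensors
```

None of this is possible against a proprietary API, where the only thing you can inspect is the output.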
The open-source approach also keeps prices in check. When Meta releases Llama and anyone can fine-tune it, OpenAI and Google can’t charge whatever they want.
The problems nobody wants to talk about
AI surveillance tools are already deployed in law enforcement across dozens of countries, often with minimal oversight. In some sectors, workforce automation is outpacing the retraining programs meant to absorb it. And the ethical frameworks being developed? They're mostly voluntary guidelines with no enforcement mechanism.
None of this means AI is bad. It means the technology is moving faster than our ability to manage its consequences, which has been true of most major technologies throughout history. The difference this time is the speed.