Vercel's presence on the AI scene

Tags: react, llm, open-ai, gemini, typescript, dx, framework, vercel
Published: Dec 25, 2025
⚠️ Disclaimer: I'm not affiliated with Vercel and I'm not being paid for this material. My motivation is simply to stay up to date with modern ecosystem tooling and cloud platforms.
Vercel is a well-known cloud platform behind the Next.js React framework, and it does a lot to provide developers with top-tier solutions for web development. Recently, they received some criticism for tightly coupling their open-source tools with their own infrastructure, which led them to promise to be more open and to enable broader adoption of their solutions.
I've been carefully following Vercel's releases this year and decided to put together a short overview of the AI tooling they currently offer. Let's see whether they were able to keep that promise.

1. AI SDK

AI toolkit for TypeScript – the foundation for agentic app development. A unified API layer for working with LLMs from different providers – from OpenAI-compatible APIs to gateways like OpenRouter. It supports streaming, tool calling, structured output, and more, including React bindings.
The library is under very active development and recently reached v6, adding:
  • agents and workflows
  • full MCP support (including remote MCPs)
  • developer tools (finally!) for debugging and token usage measurement
  • reranking
  • image editing (image-to-image)
  • structured output with tool calls (previously a real pain)
The AI SDK can deservedly claim to be the go-to solution for AI-based apps. That said, the competition is strong and worth mentioning: Mastra.ai, LangChain, and GenKit.
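To get a feel for the API, here's a minimal sketch of streaming with a single tool call, assuming AI SDK v5+ conventions (option names such as inputSchema have shifted between major versions, so check the docs for your version; the weather tool itself is hypothetical):

```ts
import { streamText, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamText({
  model: openai('gpt-4o'),
  prompt: 'What is the weather in Berlin?',
  tools: {
    // Hypothetical tool: the input schema is validated before execute() runs.
    weather: tool({
      description: 'Get the current weather for a city',
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, temperatureC: 18 }),
    }),
  },
  // Allow a second step so the model can answer after seeing the tool result.
  stopWhen: stepCountIs(2),
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```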

2. AI Elements

AI Elements is a component library and custom registry built on top of shadcn/ui to help you build AI-native applications faster. It provides pre-built components like conversations, messages, and more.
The documentation states this very clearly: reusable React components for quickly starting AI apps. You can begin with something simple, like a chat application, and go as far as building interactive, node-based interfaces; the official examples scale all the way up to a v0 clone.
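As a rough sketch, a minimal chat view wired to the AI SDK's useChat hook could look like this (the component names follow the AI Elements registry, but treat the exact import paths and props as assumptions for your setup):

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { Conversation, ConversationContent } from '@/components/ai-elements/conversation';
import { Message, MessageContent } from '@/components/ai-elements/message';
import { Response } from '@/components/ai-elements/response';

export default function Chat() {
  const { messages } = useChat();

  return (
    <Conversation>
      <ConversationContent>
        {messages.map((message) => (
          <Message from={message.role} key={message.id}>
            <MessageContent>
              {message.parts.map((part, i) =>
                // Render only text parts; tool calls etc. are omitted here.
                part.type === 'text' ? <Response key={i}>{part.text}</Response> : null,
              )}
            </MessageContent>
          </Message>
        ))}
      </ConversationContent>
    </Conversation>
  );
}
```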

3. Streamdown

Streamdown is a standalone React component for rendering LLM responses. By default, it supports GitHub-flavored Markdown, code highlighting, math expressions, diagrams, streamed content, and more. Unlike react-markdown, it comes with preconfigured plugins out of the box, while still allowing you to customize your own rehype and remark plugins.
If your app simply needs to render LLM output, Streamdown is a solid go-to choice.
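Usage is about as small as it sounds; a minimal sketch:

```tsx
import { Streamdown } from 'streamdown';

export function LlmAnswer({ markdown }: { markdown: string }) {
  // Streamdown is built to tolerate incomplete Markdown while tokens
  // are still streaming in, so you can pass the partial response directly.
  return <Streamdown>{markdown}</Streamdown>;
}
```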

4. Workflow tools

This is a more advanced pattern for building durable operations. Workflows can be suspended, retried, streamed, and executed with persistent state across sub-calls.
A workflow is essentially a function composed of steps. Each step is a reliable function that can be queued and safely executed in the future. You can think of it as being able to put every function into a queue, retry it on failure, delay its execution, and persist its result in storage. To see it in action, you can play around with an interactive workflow builder (which is built on top of the tools covered in this article).
Workflows rely on a specific infrastructure implementation. They can run both on Vercel and on any other cloud platform (including self-hosted setups) that implements the Worlds architecture. The setup is somewhat complex, but possible.
Workflows are still very early and continue the directive-based pattern ('use workflow', 'use step') familiar from 'use client' and 'use server'. I believe we'll see more Worlds providers in the future, with pricing that makes sense for small to medium-sized teams.
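A sketch of the directive-based pattern; the function names and payloads here are illustrative, not part of the API:

```ts
export async function processOrder(orderId: string) {
  'use workflow';
  // Each step call below is durable: it can be queued, retried,
  // and resumed without re-running the steps that already succeeded.
  const order = await fetchOrder(orderId);
  const receipt = await chargeCustomer(order);
  await sendConfirmation(order.id, receipt.chargedAmount);
}

async function fetchOrder(orderId: string) {
  'use step';
  return { id: orderId, total: 42 }; // illustrative payload
}

async function chargeCustomer(order: { id: string; total: number }) {
  'use step';
  return { chargedAmount: order.total };
}

async function sendConfirmation(orderId: string, amount: number) {
  'use step';
  console.log(`Order ${orderId} charged ${amount}`);
}
```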

5. AI Gateway

A centralized entry point for your LLM interactions. Instead of spreading your budget across multiple providers (OpenAI, Grok, Vertex, etc.), you get a single place to fund your usage and select the models you need.
AI Gateway includes a $5 monthly credit, removes rate limits, and adds no additional fees on upstream providers – meaning you pay exactly the same price as you would when using a model directly. It also gives you a single dashboard with observability, token usage, and other metrics.
It's similar to OpenRouter (which can be slow in production), but backed by Vercel’s infrastructure.
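With the AI SDK, switching to the gateway is mostly a matter of addressing models as 'provider/model' strings; this sketch assumes an AI_GATEWAY_API_KEY is configured in the environment:

```ts
import { generateText } from 'ai';

// The plain string is resolved through the AI Gateway, so swapping
// 'openai/gpt-4o' for, say, a Gemini model needs no other code changes.
const { text } = await generateText({
  model: 'openai/gpt-4o',
  prompt: 'Summarize the main benefits of an LLM gateway in two sentences.',
});

console.log(text);
```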

6. Sandbox

Isolated code execution in a secure microVM environment. It's useful for AI scenarios where you need to safely run user-provided or generated code.
The base system is Amazon Linux 2023 with a set of preinstalled packages. You can install additional dependencies if needed (sudo is available as an opt-in), and a sandbox can run continuously for up to 5 hours, reachable on a public URL.
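A hedged sketch based on the @vercel/sandbox package; treat the option names and the result API as assumptions and verify them against the docs:

```ts
import { Sandbox } from '@vercel/sandbox';

// Spin up an ephemeral microVM (assumption: 'node22' is an available runtime).
const sandbox = await Sandbox.create({
  runtime: 'node22',
  timeout: 5 * 60 * 1000, // ms
});

// Run untrusted or generated code inside the VM, not in your own process.
const result = await sandbox.runCommand({
  cmd: 'node',
  args: ['-e', "console.log('hello from the microVM')"],
});
console.log(await result.stdout());

await sandbox.stop();
```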

7. v0

An AI app builder, but not only that. The most interesting part is that Vercel also provides a headless v0 SDK, which can be used as a UI and code generator inside your own pipelines, not just as a standalone tool.
In simple terms, you can use most of v0's infrastructure (dialogs, storage, RAG, etc.) while keeping your own UI and ownership model (for example, defining relationships between users and v0 chats).
In addition, the v0 SDK provides a set of tools that enable autonomous agents (your own models) to interact with the v0 Platform API. This includes chat, projects, and other APIs for managing the platform. It allows developers to be selective and avoid relying exclusively on v0's built-in models.
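A minimal sketch with the v0-sdk package, assuming a V0_API_KEY in the environment; the response fields shown follow the Platform API quickstart, so double-check them for your SDK version:

```ts
import { v0 } from 'v0-sdk';

// Create a chat and let v0 generate an app from a single prompt.
const chat = await v0.chats.create({
  message: 'Build a pricing page with three tiers',
});

// The chat id lets you attach the conversation to your own user model;
// `demo` points at a live preview of the generated app.
console.log(chat.id, chat.demo);
```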

As you can see from this overview, each tool (except Sandbox) can be used independently, without being tied to Vercel's cloud. The quality of the documentation and the pace of development are genuinely impressive, but at the same time they force developers to stay alert, knowing that the next release might change things significantly. In my view, Vercel has chosen the right strategy: keeping a high quality bar while moving in step with the current pace of the industry, listening to their community, and keeping their own business model running.