
Wes Kim

(Unyoung)


AI: Recap & What's coming (2026)


It's been about three years since GPT hit the mainstream. At this point, I'm comfortable saying the first inning is behind us and the dust is finally settling. The early excitement has turned into clearer patterns about what is real, what is hype, and where value is actually being created.

Here are a few patterns I keep coming back to.

AI will widen intellectual inequality

Before I jump into the more technical topics, here's my high-level prediction on how AI will affect society. My short answer: it will widen "intellectual inequality."

AI is an amplifier. It amplifies mental laziness, and it amplifies intellectual curiosity. You can use it to outsource even the smallest tasks and slowly stop thinking. Or you can use it to learn hard concepts faster than you thought possible, explore more ideas, and compound your capabilities.

A useful analogy is fitness. Muscles grow through a simple mechanism: time under tension (TUT), the total time a muscle is actively working during a set. Your brain works in a similar way. Progress comes from sustained effort, attention, and deliberate struggle. AI changes the equation in two directions. It can help you reduce TUT by doing the thinking for you. Or it can increase your output for the same TUT by making each unit of effort more productive.

There is a historical parallel in the adoption of automobiles. Cars let us travel farther and do more. They also made it easier to move less. My guess is that physical health inequality was less pronounced before cars, simply because daily life required more movement. Once that baseline disappeared, we saw the rise of an entire health and fitness industry to compensate.

I think we will see the same pattern for the mind. As AI removes friction from knowledge work, a new "mental fitness" layer will emerge: tools, habits, and training systems that help people maintain attention, build reasoning stamina, and develop taste. In a world where intelligence is increasingly assisted, the differentiator will be who continues to train it. My bet is that mental fitness will mirror physical fitness, with adoption led by higher income countries and communities.

From scaling to research

As Ilya Sutskever has put it, we are moving from the age of scaling to the age of research. The last few years were dominated by scale. More GPUs, more data, more compute, and bigger models. That playbook worked incredibly well, but we are now seeing diminishing returns. Scaling still matters, but it is no longer the whole story.

The next phase looks less like "bigger models" and more like systems that are more useful in the real world. Personalized AI that adapts to individuals. Agents that can plan and act over time. Population scale simulations that let us test decisions before we make them. Reinforcement learning that improves autonomy and robustness.

In plain terms, the focus is shifting from models that talk to computers that do. The hard problem is no longer prompt obedience. We can already get high quality responses. The hard problem is navigating ambiguity. Real environments are messy, incomplete, and full of exceptions. That is where the next breakthroughs and the next businesses will be built.

OpenAI did not actually kill that many startups

Every OpenAI launch gets followed by the same meme cycle: "OpenAI just killed thousands of startups." Voice release means voice startups are dead. Agents release means automation platforms are dead. New tool release means infrastructure startups are dead.

My take is that this is mostly a knee-jerk reaction to DevDay energy and distribution envy. OpenAI has massive consumer reach and a first-mover advantage, so people assume anything adjacent to their roadmap becomes obsolete. In practice, when you compare like for like, it is rarely true that OpenAI's version is strictly better than specialized products.

Their workflow tooling is not better than dedicated automation platforms like n8n or Make for teams that live inside those systems. Their releases often function more like experimental reference implementations than finished products that win day to day usage. We have seen this pattern before. The "ChatGPT app store" moment was supposed to change everything, and yet distribution and adoption have been far more uneven than the hype suggested.

Most companies, especially in B2B, prefer purpose-built tools that are deeply integrated into their stack, supported aggressively, and developed with a narrow user in mind. A side product inside a general AI platform rarely becomes the system of record.

Vector databases are a good example. OpenAI introduced adjacent functionality, but serious teams building retrieval systems still rely on dedicated vendors because reliability, observability, latency, cost control, and deployment options matter. OpenAI's offering is convenient for prototyping and lightweight use cases, not a default choice for production infrastructure. The potential downfall of vector DBs themselves is a separate topic, but you get the gist.

The right way to interpret many OpenAI launches is not "this replaces the category." It is "this expands the baseline and validates demand." In consumer categories, OpenAI can absolutely crush. In B2B, the story is more nuanced and often favors specialists.

Moving from augmentation to automation: Software-as-a-Service to Service-as-Software

From 2023 through 2025, the dominant pattern was augmentation. Tools that made professionals faster and more effective. Cursor for developers. Harvey for legal work. Vertical software that embedded copilots into existing workflows.

The next wave is automation. Not replacing entire professions overnight, but automating tasks inside professions. Start with low risk, repetitive work. Move up the stack as reliability improves, regulation evolves, and trust accumulates. The leap from augmentation to automation faces far bigger technical and psychological barriers than most people assume.

You can think of it like self driving. The early years were full of disputes because the stakes are high and failure is visible. Then the systems quietly get better than humans on specific routes and in specific conditions. Even then, adoption lags capability because the tolerance for machine error is lower than the tolerance for human error.

That gap is rational. When a human makes a mistake, we can often understand why. When an AI makes a mistake, it can be harder to explain, harder to predict, and harder to assign responsibility. Trust will grow, but it will grow in constrained environments first. Clear policies, strong auditability, and obvious accountability will matter as much as raw model performance.

Initially, to calm these disputes and doubts, companies are operating as agencies, or what YC calls "Full Stack AI Firms" (Crosby is one example), with a human in the loop. The line between software and service is blurring quickly, and the change is already happening.

Raising pre-seed and seed is harder, and that is rational

The bar has moved. You increasingly need a product and at least one real customer to raise early rounds. That feels fair.

AI has lowered the cost of building to the point where a deck and a strong narrative are not enough for most "thin" products. If a team can ship an MVP quickly, investors want to see execution. If the company is truly technical (deep infrastructure, hard science, defensible data, unique distribution), you can still raise earlier. But for many application-layer products, the market is asking for proof.

Conclusion

I think long-horizon predictions are often easier than short-term ones because of regression to the mean: extreme outcomes tend to be followed by results closer to average due to normal randomness, not a real shift in fundamentals. Over longer timeframes, the noise washes out and the underlying direction becomes clearer.
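Regression to the mean is easy to see with synthetic data. Below is a minimal sketch (my own illustration, not from the original post): each "period" is a fixed underlying level plus independent noise, and the periods that look extreme are followed by independent draws that land back near the average.

```python
import random

random.seed(0)

# Assumed model: observed outcome = stable underlying level + random noise.
true_level = 50.0
outcomes = [true_level + random.gauss(0, 10) for _ in range(10_000)]

# Select the extreme periods (roughly the top 1% of outcomes)...
cutoff = sorted(outcomes)[-100]
extreme = [x for x in outcomes if x >= cutoff]

# ...then look at an independent follow-up draw for each of them.
followups = [true_level + random.gauss(0, 10) for _ in extreme]

print(sum(extreme) / len(extreme))      # well above the true level of 50
print(sum(followups) / len(followups))  # back near 50
```

The extremes were extreme mostly because of noise, so their follow-ups cluster near the underlying level even though nothing about the fundamentals changed, which is the point of the paragraph above.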

So while we'll keep seeing new waves of hype surge and fade, AI isn't a passing trend. It's here to stay.

2026 — Wes Kim's Personal Website