1) What AI is already good at
AI is strongest where structure already exists. Existing codebases, known architectures, and repeatable workflows become high-quality context. That is why coding acceleration is so visible today: models can often port, refactor, and extend known systems faster than humans expect.
Useful reference: the Claude C compiler write-up and what it says about structure-rich tasks.
2) Where value moves next
If skill can be copied, selection quality matters more. In AI-first work, the winning behavior is choosing the best candidate among many generated options.
Raw skill trends toward commodity. Curated workflows, trusted judgment, and audience trust become premium.
The clearer your intent and constraints, the better your output quality and execution velocity.
As build cost drops, distribution and network become even more important than implementation labor.
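The "selection over generation" idea above can be sketched as a best-of-N loop: generate many candidates, then pick via an explicit scoring function that encodes your intent and constraints. This is a minimal illustration, not a specific product or API; `generate` and `score` are hypothetical stand-ins.

```python
import random

def generate(prompt: str, n: int, seed: int = 0) -> list[str]:
    """Stand-in for an AI generator: returns n candidate drafts.
    (Illustrative only; a real system would call a model API.)"""
    rng = random.Random(seed)
    return [f"{prompt} :: draft-{rng.randint(100, 999)}" for _ in range(n)]

def score(candidate: str, constraints: list[str]) -> int:
    """Judgment made explicit: +1 for each constraint the draft satisfies."""
    return sum(1 for c in constraints if c in candidate)

def best_of_n(prompt: str, constraints: list[str], n: int = 8) -> str:
    """Selection quality: return the highest-scoring candidate, not the first."""
    candidates = generate(prompt, n)
    return max(candidates, key=lambda c: score(c, constraints))
```

The premium skill is in `score`: the clearer the rubric, the more the cheap generation step converts into selected quality.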
3) Product metric shift: time spent in product should approach zero
Historically, many product metrics rewarded engagement time. In an agent-driven world, users will value completed outcomes with minimal interface effort. New north-star candidates:
- Time to completed outcome
- Friction steps per successful task
- Deletion time and reversal time
- User intent-to-result latency
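One hedged way to operationalize these candidates is to derive them from per-task event timestamps. The event names below are assumptions chosen for illustration, not an established instrumentation schema.

```python
from dataclasses import dataclass

@dataclass
class TaskEvents:
    """Timestamps (seconds) for one user task; field names are illustrative."""
    intent_expressed: float   # user states what they want
    first_result: float       # first usable result appears
    outcome_completed: float  # task verified as done
    friction_steps: int       # confirmations, form fields, retries along the way

def intent_to_result_latency(t: TaskEvents) -> float:
    """How long the user waits before seeing anything usable."""
    return t.first_result - t.intent_expressed

def time_to_completed_outcome(t: TaskEvents) -> float:
    """The end-to-end north-star candidate: intent to verified outcome."""
    return t.outcome_completed - t.intent_expressed

# Example: intent at t=0, first result at 4.0s, outcome done at 9.5s,
# with 2 friction steps in between.
t = TaskEvents(0.0, 4.0, 9.5, 2)
```

Under this framing, success means both latencies and `friction_steps` trend toward zero, while time spent staring at the interface stops being a goal at all.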
4) PM operating model in AI
Two practical paths are emerging:
- Aggressive path: PM runs problem to production with AI as force multiplier.
- Stepping-stone path: PM prototypes fast, gathers real data, iterates with leadership, then derives PRD/spec/docs from validated code and behavior.
5) First principles I use
- Context is everything: inbox, internet, code, and user history are all context layers.
- Programs exist because reliability matters; long term, reliability/cost curves may shift toward direct intent systems.
- Personal agents will abstract increasing information noise.
- As software build cost trends down, planning overhead should also trend down.