Managing the "Empty Chair": How to Onboard AI Developers into Scrum Teams
You have just purchased a license for Devin, the autonomous AI software engineer. You have assigned it a Jira ticket. Now, the Engineering Manager asks a question that stops the room: "Does Devin come to the Standup?"
This is not a joke; it is a governance crisis. We are entering an era where "Digital Workers" contribute code alongside human developers. They don't sleep, they don't complain, and they can generate code far faster than any human on your team. But they also hallucinate, lack context, and can flood your repository with 10,000 lines of unverified code overnight.
This guide is for Scrum Masters and Engineering Leaders who need to integrate AI agents into their Agile ceremonies without breaking the team culture.
1. The "Empty Chair" Problem
The "Empty Chair" is the non-human team member that produces work but cannot participate in human social rituals. If you ignore the Empty Chair, two things happen:
- Shadow IT: Developers use the agent secretly, bypassing quality gates.
- Review Debt: The agent churns out 20 Pull Requests in a night. The human team arrives in the morning and spends the entire day reviewing code instead of building new features.
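Review Debt can be made visible with a simple back-of-the-envelope metric: open agent-authored PRs divided by the team's daily review capacity. The class, field names, and two-day threshold below are illustrative assumptions, not part of any tool:

```python
from dataclasses import dataclass

@dataclass
class ReviewDebt:
    open_agent_prs: int          # unmerged PRs authored by the agent
    daily_review_capacity: int   # PRs the human team can realistically review per day

    def days_to_clear(self) -> float:
        """Days needed to drain the queue if the agent stopped working now."""
        return self.open_agent_prs / self.daily_review_capacity

    def should_pause_agent(self, max_days: float = 2.0) -> bool:
        """Pause new agent work once the backlog exceeds the threshold."""
        return self.days_to_clear() > max_days

# The scenario above: 20 overnight PRs against a team that can review 5 per day.
debt = ReviewDebt(open_agent_prs=20, daily_review_capacity=5)
print(debt.days_to_clear())       # 4.0
print(debt.should_pause_agent())  # True
```

Tracking this number in your sprint dashboard turns "the agent is drowning us" from a feeling into a pause/resume rule.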
2. Reimagining the Daily Standup: The "Agent Sync"
Do not have your AI agent "speak" at the daily standup using text-to-speech. It is distracting and adds zero value. Instead, we propose a new sub-ceremony called the Agent Sync.
The Protocol:
- Timing: 15 minutes before the Human Standup.
- Attendees: Tech Lead + Senior Developer (The "Handlers").
- Agenda: Review the Agent's overnight logs. Did it get stuck? Did it hallucinate? Did it finish the task?
- Output: The Tech Lead reports for the agent at the main standup: "Devin completed the API migration last night; I am reviewing the PR today."
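Much of the Agent Sync can be prepared automatically: parse the agent's overnight log and generate the one-line summaries the Handlers walk through. A minimal sketch, assuming a hypothetical JSON-lines log with `ticket`, `status`, and `notes` fields (real agents expose their logs differently):

```python
import json

def agent_sync_report(log_lines):
    """Collapse overnight agent events into standup-ready bullets, one per ticket."""
    latest = {}
    for line in log_lines:
        event = json.loads(line)
        # Keep only the most recent status per ticket.
        latest[event["ticket"]] = (event["status"], event.get("notes", ""))
    bullets = []
    for ticket, (status, notes) in sorted(latest.items()):
        bullet = f"{ticket}: {status}"
        if notes:
            bullet += f" ({notes})"
        bullets.append(bullet)
    return bullets

overnight = [
    '{"ticket": "API-42", "status": "PR opened", "notes": "needs human review"}',
    '{"ticket": "API-43", "status": "stuck", "notes": "ambiguous acceptance criteria"}',
]
for b in agent_sync_report(overnight):
    print(b)
# API-42: PR opened (needs human review)
# API-43: stuck (ambiguous acceptance criteria)
```

The Handlers scan this list in the Agent Sync, and the Tech Lead lifts one line from it into the human standup.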
3. Story Points: Complexity vs. Effort
In traditional Scrum, Story Points estimate "Effort." A complex database migration might be 8 points because it takes 3 days.
For an AI Agent, that same task might take 10 minutes. Is it now a 1-point story? No.
You must redefine Story Points to measure Risk and Review Complexity. Even if the AI generates the code instantly, a human must verify it. If the verification is risky and complex, the Story Point value remains high.
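One way to operationalize this: score each ticket on verification risk and review complexity, and map the product onto the point scale. The scales and mapping below are an illustrative heuristic, not a standard:

```python
# Illustrative heuristic: points are driven by verification cost, not generation effort.
RISK = {"low": 1, "medium": 2, "high": 3}
REVIEW = {"trivial": 1, "moderate": 2, "deep": 3}

POINT_SCALE = [1, 2, 3, 5, 8, 13]

def ai_story_points(risk: str, review_complexity: str) -> int:
    """Map risk x review complexity to the nearest value on the point scale."""
    raw = RISK[risk] * REVIEW[review_complexity]  # ranges 1..9
    return min(p for p in POINT_SCALE if p >= raw)

# The database migration the agent writes in 10 minutes still scores high,
# because a human must deeply verify high-risk output:
print(ai_story_points("high", "deep"))      # 13
print(ai_story_points("low", "trivial"))    # 1
```

The exact numbers matter less than the conversation they force: the team estimates the cost of *trusting* the code, not of typing it.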
4. The Definition of Done (DoD) for AI
You cannot trust an agent's definition of "Done." You need a stricter contract for Digital Workers. An AI-generated ticket is only "Done" when:
- The code compiles and passes all unit tests.
- A security scanning tool (such as Snyk) finds zero vulnerabilities.
- A human engineer has conceptually approved the implementation strategy.
- Documentation has been auto-generated and verified for readability.
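This DoD works best as an enforced merge gate rather than a checklist on a wiki page. A minimal sketch, assuming each item reports a boolean (wiring the checks to a real test runner or scanner is out of scope here; the check names are hypothetical):

```python
# The four DoD items from the list above, as named gate checks.
DOD_CHECKS = (
    "unit_tests_pass",
    "security_scan_clean",
    "human_strategy_approval",
    "docs_verified",
)

def is_done(results: dict) -> tuple:
    """Return (done, failing_checks) for an AI-generated ticket."""
    failing = [c for c in DOD_CHECKS if not results.get(c, False)]
    return (not failing, failing)

ticket = {
    "unit_tests_pass": True,
    "security_scan_clean": True,
    "human_strategy_approval": False,  # the agent cannot self-approve
    "docs_verified": True,
}
done, failing = is_done(ticket)
print(done, failing)  # False ['human_strategy_approval']
```

Note that a missing check counts as failing: the gate defaults to "not Done," which is the safe default when the reporter is an agent.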
Frequently Asked Questions (FAQ)
Q: Should my AI agent attend the Daily Standup?
A: No. Having an AI agent "speak" via TTS is a gimmick that wastes time. Instead, implement an "Agent Sync" where human leads review the agent's overnight logs and output asynchronously before the main standup.
Q: Do Story Points still apply when an AI does the work?
A: Yes, but the meaning changes. Instead of estimating "Effort" (which is near-zero for AI), you estimate "Complexity" and "Risk." A 5-point task is still 5 points because the human effort required to review, validate, and debug the AI's work remains significant.
Q: What is Review Debt?
A: Review Debt occurs when your autonomous agents generate code faster than your human team can review and merge it. This leads to a backlog of unmerged PRs, creating a bottleneck that negates the speed advantage of using AI.