Your agentic loop occasionally continues running after Claude has finished its task. The code checks if len(response.content) > 0 and response.content[0].type == "text" to determine completion. What is wrong with this approach?
Explanation
Text presence does not mean the agent is finished — Claude often returns explanatory text before tool calls. The stop_reason field ("end_turn" vs "tool_use") is the only deterministic signal for whether the agent intends to continue working.
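A minimal sketch of the corrected check, using a stub response object shaped like a Messages API response (the stub class is illustrative; `stop_reason` and its `"tool_use"`/`"end_turn"` values are the real API fields):

```python
from dataclasses import dataclass

@dataclass
class FakeResponse:
    """Stub mimicking the Messages API response shape for illustration."""
    stop_reason: str
    content: list

def agent_should_continue(response) -> bool:
    # "tool_use" means Claude paused to request tool results;
    # "end_turn" means the task is done, even if text was emitted first.
    return response.stop_reason == "tool_use"
```

Checking `stop_reason` instead of `content` correctly handles the common case where Claude emits explanatory text and then a tool call in the same response.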
In a multi-agent research system, the synthesis subagent produces a report missing key findings that the web search subagent discovered. Logs show the coordinator invoked the synthesis agent after the web search completed. What is the most likely cause?
Explanation
Subagent context isolation means the synthesis agent starts with only what the coordinator provides in its prompt. There is no automatic inheritance of prior results. The coordinator must explicitly pass the web search findings as part of the synthesis subagent's input.
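A sketch of what explicit passing looks like, assuming a coordinator that builds each subagent's prompt by hand (the function and parameter names are illustrative, not a real API):

```python
def build_synthesis_prompt(task: str, search_findings: list[str]) -> str:
    """Inline the web search results into the synthesis subagent's prompt.

    The synthesis agent starts with an empty context, so this prompt is
    the only channel through which it can see the findings.
    """
    findings_block = "\n".join(f"- {f}" for f in search_findings)
    return (
        f"Task: {task}\n\n"
        f"Findings from the web search subagent:\n{findings_block}\n\n"
        "Write a report that incorporates every finding above."
    )
```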
A customer support agent needs to verify customer identity before processing any refunds. Currently, the system prompt says "Always verify the customer before processing refunds." Production logs show the agent skips verification in 8% of cases. What should you do?
Explanation
When business rules require guaranteed compliance, programmatic enforcement (hooks) is the only approach with zero failure rate. Prompt-based approaches are probabilistic — no matter how emphatic the wording, they cannot guarantee 100% adherence.
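A sketch of the programmatic-enforcement idea as a deterministic gate that runs before the refund tool executes, independent of anything the model decided (all names here are illustrative):

```python
class VerificationError(Exception):
    """Raised when a refund is attempted for an unverified customer."""

def refund_gate(customer_id: str, verified_ids: set[str]) -> None:
    # Deterministic check: blocks every unverified refund, every time,
    # regardless of how the model phrased its tool call. This is the
    # property no prompt wording can guarantee.
    if customer_id not in verified_ids:
        raise VerificationError(f"Customer {customer_id} is not verified")
```

In Claude Code specifically, the same idea is expressed as a PreToolUse hook that intercepts the tool call before it runs.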
An agent has access to 15 tools and frequently selects the wrong one for user requests. What is the most effective architectural change?
Explanation
Tool selection reliability degrades significantly above 5 tools. The architectural solution is distributing tools across specialized subagents, not writing better descriptions for 15 tools or adding a non-LLM classifier that loses context awareness.
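One way to picture the distribution, assuming a coordinator that routes requests to domain subagents (the subagent and tool names are made up for illustration):

```python
# Each subagent is prompted with only its own short, domain-scoped tool
# list, so every selection decision happens over a handful of options
# instead of all 15.
SUBAGENT_TOOLS = {
    "billing":  ["create_refund", "get_invoice", "apply_credit"],
    "accounts": ["lookup_customer", "update_email", "reset_password"],
    "research": ["web_search", "fetch_url", "summarize_page"],
}

def tools_for(subagent: str) -> list[str]:
    return SUBAGENT_TOOLS[subagent]
```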
Your MCP tool returns {"status": "error", "message": "Operation failed"} when it cannot connect to the database. The agent keeps retrying indefinitely. What is wrong with the error response?
Explanation
Structured error responses with errorCategory and isRetryable flags let the agent distinguish transient failures (worth retrying) from permanent ones (escalate or try an alternative approach). Without this metadata, the agent defaults to retrying blindly.
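A sketch of a richer error payload. The `errorCategory` and `isRetryable` fields follow the explanation above; the extra `retryAfterMs` hint is an assumption, not part of any fixed spec:

```python
def db_connection_error() -> dict:
    """Structured error response the agent can reason about."""
    return {
        "status": "error",
        "errorCategory": "transient",  # vs. "permanent" for schema errors etc.
        "isRetryable": True,           # the agent may retry, with backoff
        "retryAfterMs": 5000,          # assumed hint field for pacing retries
        "message": "Database unreachable; connection timed out after 10s",
    }
```

A permanent failure (for example, a malformed query) would carry `"errorCategory": "permanent"` and `"isRetryable": False`, steering the agent toward escalation instead of a retry loop.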
You need to configure an MCP server that requires an API key for your project team. Where should the API key go?
Explanation
Environment variable expansion (${VAR}) in .mcp.json keeps secrets out of version-controlled files. Hardcoding keys in config files is a critical security anti-pattern, and storing them in CLAUDE.md exposes them in plaintext.
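A minimal `.mcp.json` sketch showing the expansion pattern; the server name, command, and variable name are placeholders:

```json
{
  "mcpServers": {
    "my-server": {
      "command": "npx",
      "args": ["-y", "my-mcp-server"],
      "env": {
        "API_KEY": "${MY_API_KEY}"
      }
    }
  }
}
```

The file can be committed safely; each team member sets `MY_API_KEY` in their own shell environment.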
A new team member joins and reports that Claude Code is not following the team's coding standards. Other team members have no issues. What is the most likely configuration problem?
Explanation
User-level CLAUDE.md only exists on each individual's machine. Team standards must be in project-level config (.claude/CLAUDE.md or .claude/rules/) to be shared via version control and automatically available to all team members.
You want to create a codebase analysis skill that produces verbose output and explores multiple files. How should you configure it to avoid polluting the main conversation context?
Explanation
context: fork runs the skill in an isolated sub-agent context. Its exploration output stays separate from the main session. Commands run in the current session and would pollute context with all the verbose output.
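A sketch of the skill's frontmatter with the isolation setting, as described in the explanation above (the skill name and description are illustrative):

```yaml
---
name: codebase-analysis
description: Explore the repository and report on its architecture
context: fork   # run in an isolated sub-agent; verbose exploration output stays out of the main session
---
```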
Your CI pipeline runs claude "Review this PR for security issues" and it hangs. What is the correct fix?
Explanation
The -p (or --print) flag is the documented way to run Claude Code in non-interactive mode for automated pipelines. It outputs the result directly to stdout without launching the interactive REPL. The other options reference non-existent features.
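The fixed pipeline invocation is a one-line change:

```shell
# Non-interactive mode: prints the result to stdout and exits,
# instead of launching the interactive REPL (which hangs CI).
claude -p "Review this PR for security issues"
```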
You are building a data extraction pipeline. Claude outputs JSON that matches your schema structurally but contains fabricated values for fields where the source document had no information. What schema design change would address this?
Explanation
When source documents may not contain certain information, marking those schema fields as optional/nullable prevents the model from fabricating values to satisfy required field constraints. The model can legitimately return null instead of inventing data.
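A sketch of the schema change using Python type hints as the schema notation; the field names are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InvoiceExtraction:
    vendor_name: str                      # reliably present on every invoice
    total: float                          # reliably present
    purchase_order: Optional[str] = None  # often absent; null is a legitimate answer
    due_date: Optional[str] = None        # absent on some invoices
```

With `purchase_order` required, the model is pushed to invent a value whenever the document lacks one; marked optional, it can honestly return null.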
Your extraction pipeline retries failed validations by sending "There were errors in your extraction. Please try again." The retry rarely fixes the issues. What should you change?
Explanation
Generic retry messages provide no signal for correction. Specific error feedback (field name, violation type, expected format) guides the model toward the correct output. This is the same principle as giving a human colleague specific, actionable feedback rather than "try again."
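A sketch of turning validator output into that kind of actionable feedback; the `(field, problem, expected)` tuple shape is illustrative:

```python
def build_retry_prompt(errors: list[tuple[str, str, str]]) -> str:
    """Convert validation failures into a specific correction prompt."""
    lines = [
        f"- Field '{field}': {problem}. Expected: {expected}."
        for field, problem, expected in errors
    ]
    return (
        "Your extraction had these specific problems:\n"
        + "\n".join(lines)
        + "\nReturn the corrected JSON only."
    )
```

Compare the resulting message ("Field 'due_date': not ISO 8601. Expected: YYYY-MM-DD.") with the original "There were errors, please try again" and the difference in correction signal is obvious.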
Your team uses the Message Batches API for a weekly compliance report (processed overnight). A new requirement adds a blocking pre-commit hook that must analyze code before developers can commit. How should you handle the new workflow?
Explanation
The Batch API processes requests asynchronously, with results returned anytime within a 24-hour window, making it unsuitable for blocking workflows like pre-commit hooks that need immediate results. Use the synchronous Messages API for latency-sensitive tasks and keep the Batch API for latency-tolerant overnight jobs where the 50% cost savings matter.
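The routing rule reduces to a single latency-tolerance check; this sketch uses illustrative names, with the 24-hour ceiling and ~50% discount taken from the explanation above:

```python
BATCH_MAX_LATENCY_HOURS = 24   # Batch API may take up to a full day
BATCH_DISCOUNT = 0.5           # roughly half the per-token cost

def pick_api(max_acceptable_latency_hours: float) -> str:
    # A pre-commit hook tolerates seconds, not hours -> synchronous API.
    # An overnight compliance report tolerates 24h -> Batch API, at half cost.
    if max_acceptable_latency_hours < BATCH_MAX_LATENCY_HOURS:
        return "synchronous"
    return "batch"
```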
A customer support agent handles a multi-issue conversation. After 15 turns, the agent references the wrong order number when processing a refund. What context management approach would prevent this?
Explanation
Progressive summarization loses critical details. An immutable "case facts" block preserves exact values (IDs, amounts, dates) in a high-recall position (beginning of context), ensuring they are never lost or altered during conversation summarization.
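A sketch of the pattern: a read-only facts block rendered at the top of every turn, out of the summarizer's reach (the function names and prompt labels are illustrative):

```python
from types import MappingProxyType

def make_case_facts(**facts) -> MappingProxyType:
    # Read-only mapping: summarization code cannot mutate or drop these values.
    return MappingProxyType(dict(facts))

def render_context(case_facts, summary: str) -> str:
    facts = "\n".join(f"{k}: {v}" for k, v in case_facts.items())
    # Facts go first: the high-recall position at the start of the context.
    return (
        f"CASE FACTS (verbatim, do not alter):\n{facts}\n\n"
        f"CONVERSATION SUMMARY:\n{summary}"
    )
```

Only the summary section is regenerated between turns; order numbers, amounts, and dates survive verbatim no matter how aggressively the conversation is compressed.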
Your agent's overall extraction accuracy is 96%. After deploying to production, clients report poor results on invoice documents. Investigation shows invoice accuracy is 72% while receipt accuracy is 99%. What metric design flaw allowed this?
Explanation
Aggregate metrics can hide poor performance on specific segments. 99% on receipts + 72% on invoices can average to 96% overall if receipts dominate the dataset. Stratified metrics by document type reveal these hidden failures and prevent surprises in production.
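The arithmetic is easy to verify: with illustrative counts of 900 receipts at 99% and 100 invoices at 72%, the blended accuracy is (891 + 72) / 1000 = 96.3%, which looks fine while invoices quietly fail. A sketch of the stratified version:

```python
def stratified_accuracy(results: list[tuple[str, bool]]) -> dict:
    """Accuracy per document type plus the (misleading) overall number."""
    by_type: dict[str, list[bool]] = {}
    for doc_type, correct in results:
        by_type.setdefault(doc_type, []).append(correct)
    report = {t: sum(v) / len(v) for t, v in by_type.items()}
    report["overall"] = sum(c for _, c in results) / len(results)
    return report

# Illustrative dataset: receipts dominate, hiding the invoice failure.
results = (
    [("receipt", True)] * 891 + [("receipt", False)] * 9
    + [("invoice", True)] * 72 + [("invoice", False)] * 28
)
```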
In a multi-agent research system, the final report states "AI adoption grew 40% in 2025" without any source. The coordinator cannot determine which subagent produced this claim or where it came from. What architectural change would prevent this?
Explanation
Information provenance requires structured claim-source mappings from the start. Without them, attribution is irretrievably lost during summarization. Post-hoc verification is unreliable because the original source may no longer be determinable.
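A sketch of a claim-source mapping attached at collection time, so attribution travels with the claim through every summarization step (the class and field names are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    """A finding paired with its provenance, recorded when it is collected."""
    text: str
    source_url: str
    subagent: str  # which subagent produced this claim

def cited_sentence(claim: Claim) -> str:
    # The final report renders claims only through this helper,
    # so no claim can appear without its source.
    return f"{claim.text} [{claim.source_url}]"
```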