Commit aaec95b, authored by Mossaka and claude

test: add smoke testing (#97)
* test: add smoke testing
* fix: remove setup go step from smoke-claude
* fix: add npm ci step
* fix: use abs path for awf in sudo
* fix: add missing GitHub domains to the smoke-claude whitelist. The GitHub MCP server needs api.github.com to make API calls, and Playwright needs github.com for browser navigation; without these domains in the whitelist, all network-based MCP tools fail with connection errors (GitHub MCP: `Get https://api.github.com/...: Forbidden`; Playwright: `ERR_TUNNEL_CONNECTION_FAILED at https://github.com`; Serena MCP failed to launch, likely needing network access during startup).
* fix: add the safeoutputs MCP tools (`mcp__safeoutputs__add_comment`, `mcp__safeoutputs__add_labels`, `mcp__safeoutputs__create_issue`, `mcp__safeoutputs__missing_tool`, `mcp__safeoutputs__noop`) to the `--allowed-tools` list in the smoke-claude workflow, resolving permission errors where Claude Code requested these tools but they weren't granted.
* fix: add the PyPI domains files.pythonhosted.org and pypi.org to the allowed domains so the serena MCP server can download Python dependencies via uvx.
* fix: remove the serena MCP server from the smoke-claude workflow. Serena requires uvx (the Python package runner), which is not installed in the agent container; since serena is commented out in the tools configuration and not actively used in smoke tests, removing it resolves the "MCP server(s) failed to launch: serena" error.
* feat: configure the serena MCP server with Docker. Switch serena from uvx to Docker-based execution to avoid Python dependency issues in the agent container: the ghcr.io/oraios/serena:latest image is pulled and run inside the firewall, the workspace is mounted as /workspaces/projects, and ghcr.io is added to the allowed domains for image pulling.

Signed-off-by: Jiaxiao (mossaka) Zhou <duibao55328@gmail.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
1 parent: a2c5d48 · commit: aaec95b

3 files changed: 15,345 additions, 0 deletions

The new documentation file below (110 additions, 0 deletions) is shown in full:

## MCP Response Size Limits

MCP tool responses have a **25,000 token limit**. When GitHub API responses exceed this limit, workflows must retry with pagination parameters, wasting turns and tokens.

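Since the cap applies to the serialized response, it can help to estimate size before requesting more data. A minimal sketch in Python, assuming the common rule of thumb of roughly four characters per token (an approximation, not something the MCP limit guarantees):

```python
# Rough pre-flight size check: estimate the token count of a serialized
# response and decide whether the next request needs a smaller perPage.
# The 4-characters-per-token ratio is a rule of thumb, not an MCP guarantee.
import json

TOKEN_LIMIT = 25_000  # the MCP response cap described above

def estimate_tokens(payload) -> int:
    """Approximate token count of a JSON-serializable payload."""
    return len(json.dumps(payload)) // 4

def fits_limit(payload, limit: int = TOKEN_LIMIT) -> bool:
    return estimate_tokens(payload) < limit

# A response of 200 PR objects at ~600 characters each blows the cap,
# so the next call should shrink perPage.
big_response = [{"title": "x" * 580, "number": n} for n in range(200)]
print(fits_limit(big_response))  # False
```
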
### Common Scenarios

**Problem**: Fetching large result sets without pagination

- `list_pull_requests` with many PRs (75,897 tokens in one case)
- `pull_request_read` with a large diff or many comments (31,675 tokens observed)
- `search_issues` and `search_code` with many results

**Solution**: Paginate proactively to stay under the token limit

### Pagination Best Practices

#### 1. Use the `perPage` Parameter

Limit results per request to prevent oversized responses:

```bash
# Good: fetch PRs in small batches
list_pull_requests --perPage 10

# Good: get an issue with a limited number of comments
issue_read --method get_comments --perPage 20

# Bad: default pagination may return too much data
list_pull_requests  # may exceed the 25k-token limit
```

#### 2. Common `perPage` Values

- **10-20**: for detailed items (PRs with diffs, issues with comments)
- **50-100**: for simpler list operations (commits, branches, labels)
- **1-5**: for exploratory queries or schema discovery

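These ranges can be captured as lookup defaults. A small sketch; the category names and the helper are hypothetical, invented here just to organize the values from the list above:

```python
# Suggested perPage defaults from the ranges above; the category names
# are hypothetical, chosen here only to organize the lookup.
PER_PAGE_DEFAULTS = {
    "detailed": 10,      # PRs with diffs, issues with comments
    "list": 50,          # commits, branches, labels
    "exploratory": 3,    # schema discovery, quick probes
}

def per_page_for(kind: str) -> int:
    # Unknown categories fall back to the small, safe default.
    return PER_PAGE_DEFAULTS.get(kind, 10)

print(per_page_for("list"))  # 50
```
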
#### 3. Handle Pagination Loops

When you need all results:

```bash
# Step 1: fetch the first page
result=$(list_pull_requests --perPage 20 --page 1)

# Step 2: check whether more pages exist; most list operations return
# metadata about the total count or the next page

# Step 3: fetch subsequent pages as needed
result=$(list_pull_requests --perPage 20 --page 2)
```

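The loop above can be sketched generically. `fetch_page` is a hypothetical stand-in for any MCP list tool (e.g. `list_pull_requests`); the loop treats a page shorter than `perPage` as the signal that no results remain:

```python
# Generic pagination loop: keep requesting pages until one comes back
# short, which signals the last page. fetch_page is a hypothetical
# stand-in for an MCP list tool; swap in the real call in a workflow.
from typing import Callable, List

def fetch_all(fetch_page: Callable[[int, int], List[dict]],
              per_page: int = 20, max_pages: int = 50) -> List[dict]:
    items: List[dict] = []
    for page in range(1, max_pages + 1):
        batch = fetch_page(page, per_page)
        items.extend(batch)
        if len(batch) < per_page:   # short page: nothing left to fetch
            break
    return items

# Demo with a fake backend holding 45 items.
data = [{"number": n} for n in range(45)]

def fake_fetch(page: int, per_page: int) -> List[dict]:
    start = (page - 1) * per_page
    return data[start:start + per_page]

print(len(fetch_all(fake_fetch)))  # 45 (fetched across 3 pages)
```

The `max_pages` guard keeps a buggy or metadata-less backend from looping forever.
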
### Tool-Specific Guidance

#### Pull Requests

```bash
# Fetch recent PRs in small batches
list_pull_requests --state all --perPage 10 --sort updated --direction desc

# Get PR details without the full diff and comments
pull_request_read --method get --pullNumber 123

# Get PR files separately if needed
pull_request_read --method get_files --pullNumber 123 --perPage 30
```

#### Issues

```bash
# List issues with pagination
list_issues --perPage 20 --page 1

# Get issue comments in batches
issue_read --method get_comments --issue_number 123 --perPage 20
```

#### Code Search

```bash
# Search with limited results
search_code --query "function language:go" --perPage 10
```

### Error Messages to Watch For

If you see these errors, add pagination:

- `MCP tool "list_pull_requests" response (75897 tokens) exceeds maximum allowed tokens (25000)`
- `MCP tool "pull_request_read" response (31675 tokens) exceeds maximum allowed tokens (25000)`
- `Response too large for tool [tool_name]`

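When one of these errors does slip through, a workflow can recover by retrying with a smaller `perPage` instead of giving up. A sketch; `ResponseTooLarge` and `call_tool` are hypothetical stand-ins for however the harness surfaces the oversize error:

```python
# Fallback for oversized responses: halve perPage and retry.
# ResponseTooLarge and call_tool are hypothetical stand-ins for the way
# a real workflow surfaces the "exceeds maximum allowed tokens" error.
class ResponseTooLarge(Exception):
    pass

def fetch_with_backoff(call_tool, per_page: int = 50, floor: int = 1):
    while per_page >= floor:
        try:
            return call_tool(per_page)
        except ResponseTooLarge:
            per_page //= 2          # retry with a smaller batch
    raise RuntimeError("response too large even at the smallest perPage")

# Demo: a fake tool that only succeeds once perPage drops to 10 or below.
def fake_tool(per_page: int):
    if per_page > 10:
        raise ResponseTooLarge("response exceeds maximum allowed tokens")
    return list(range(per_page))

print(len(fetch_with_backoff(fake_tool)))  # 6 (after 50 -> 25 -> 12 -> 6)
```
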
### Performance Tips

1. **Start small**: use `perPage: 10` initially and increase only if needed
2. **Fetch incrementally**: get an overview first, then details for specific items
3. **Avoid wildcards**: don't fetch all data when you only need specific items
4. **Use filters**: combine `perPage` with state, label, or date filters to reduce results

### Example Workflow Pattern

```markdown
# Analyze Recent Pull Requests

1. Fetch the 10 most recent PRs (stays under the token limit)
2. For each PR, get a summary without the full diff
3. If detailed analysis is needed, fetch the files for that PR separately
4. Process results incrementally rather than loading everything at once
```

This proactive approach eliminates retry loops and reduces token consumption.
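
The four steps above can be sketched as a single routine. `list_recent_prs` and `get_pr_files` are hypothetical stand-ins for the `list_pull_requests` and `pull_request_read --method get_files` tools:

```python
# Incremental PR analysis: a small overview batch first, per-PR detail
# only on demand. list_recent_prs and get_pr_files are hypothetical
# stand-ins for the MCP tools; swap in real calls inside a workflow.
from typing import Callable, Dict, List

def analyze_recent_prs(list_recent_prs: Callable[[int], List[dict]],
                       get_pr_files: Callable[[int, int], List[str]],
                       limit: int = 10) -> List[Dict]:
    report = []
    for pr in list_recent_prs(limit):                          # step 1: small batch
        summary = {"number": pr["number"], "title": pr["title"]}  # step 2: no diff
        if pr.get("needs_detail"):                             # step 3: drill down
            summary["files"] = get_pr_files(pr["number"], 30)
        report.append(summary)                                 # step 4: incremental
    return report

# Demo with fake backends.
prs = [{"number": 1, "title": "fix bug", "needs_detail": True},
       {"number": 2, "title": "add docs"}]
files = {1: ["main.go", "main_test.go"]}

result = analyze_recent_prs(lambda n: prs[:n],
                            lambda num, pp: files[num][:pp])
print(result)
```
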
