This guide shows you how to set up and interact with Model Context Protocol (MCP) servers to extend gpt2099's capabilities.
MCP servers are external tools that provide additional capabilities like file editing, web search,
or database access. gpt2099 integrates with these servers through the cross.stream generator
pattern, allowing you to experiment with and understand server capabilities before using them in
conversations.
```nu
gpt mcp register filesystem "npx -y @modelcontextprotocol/server-filesystem /path/to/allowed/directory"
```

This spawns the server as a cross.stream generator, making its tools available for use.
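
Under the hood, an MCP server is a child process that speaks JSON-RPC 2.0 over stdio, and the session begins with an `initialize` handshake. As a rough sketch of what registration does (this is illustrative Python, not gpt2099's actual implementation; the `spawn_mcp_server` helper and client name are made up):

```python
import json
import subprocess

def spawn_mcp_server(command: list[str]) -> subprocess.Popen:
    """Spawn an MCP server as a child process speaking JSON-RPC over stdio."""
    proc = subprocess.Popen(
        command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )
    # The MCP handshake starts with an `initialize` request from the client.
    init = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }
    proc.stdin.write(json.dumps(init) + "\n")
    proc.stdin.flush()
    return proc
```

Everything after this point — listing tools, calling tools — is further JSON-RPC traffic over the same pipe.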
```nu
gpt mcp tool list filesystem
```

This shows you what tools the server provides:
```
──#──┬───────────name────────────┬─────────────────────────────────────
  0  │ read_file                 │ Read the contents of a file
  1  │ read_multiple_files       │ Read multiple files at once
  2  │ write_file                │ Write content to a file
  3  │ create_directory          │ Create a new directory
 ...
```
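
This table is built from a `tools/list` response. Per the MCP spec, each tool carries a `name`, a `description`, and an `inputSchema` (a JSON Schema describing its arguments) — the schema is what tells you the input format before you ever call the tool. A sketch of such a response and how you might read it (the response content here is illustrative):

```python
import json

# A trimmed `tools/list` response in the shape defined by the MCP spec.
response = json.loads("""
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {"name": "read_file",
       "description": "Read the contents of a file",
       "inputSchema": {"type": "object",
                       "properties": {"path": {"type": "string"}},
                       "required": ["path"]}}
    ]
  }
}
""")

# Index tools by name, as a client would before dispatching calls.
tools = {t["name"]: t for t in response["result"]["tools"]}
print(tools["read_file"]["inputSchema"]["required"])
```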
Before using tools in conversations, test them manually:
```nu
gpt mcp tool call filesystem read_file {path: "/path/to/file.txt"}
```

Test tools to understand their input format and expected output.
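
The record you pass on the command line becomes the `arguments` of a `tools/call` JSON-RPC request, which is what actually travels to the server. A minimal sketch of that message (the `make_tool_call` helper is hypothetical; the message shape follows the MCP spec):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a `tools/call` JSON-RPC request as defined by the MCP spec."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = make_tool_call(3, "read_file", {"path": "/path/to/file.txt"})
```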
Once registered, specify servers when making requests:
"Read the contents of config.json and explain its structure" | gpt --servers [filesystem] -p milliThe LLM will automatically use the appropriate tools from the filesystem server.
You can use multiple servers simultaneously:
```nu
gpt mcp register web-search "npx -y @modelcontextprotocol/server-brave-search"

"Research current best practices and update our README.md file" | gpt --servers [web-search, filesystem] -p kilo
```

When the LLM wants to use a tool, you see the proposed tool call and have several options:
```
┌─────────────┬──────────────────────────────────────────────────────────┐
│ name        │ filesystem___write_file                                  │
│ input       │ {path: "config.json", content: "{\n  \"updated\": true}"}│
└─────────────┴──────────────────────────────────────────────────────────┘
Execute?
> yes
  no: do something different
  no
  activate: yolo
```
Options:
- `yes` - Execute the tool call as proposed
- `no: do something different` - Provide a custom response that is sent in place of the tool result
- `no` - Skip this tool call and stop
- `activate: yolo` - Enable YOLO mode for automatic execution
YOLO mode automatically executes tool calls without prompting:
```nu
# Enable YOLO mode via environment variable
$env.GPT2099_YOLO = true

"Update all config files" | gpt --servers [filesystem] -p milli
```

Or activate it during a conversation by selecting "activate: yolo" when prompted. Once activated, all subsequent tool calls in the session execute automatically.
When you select "no: do something different", you can provide custom input:
```
Enter alternative response: The file already exists and shouldn't be modified
```
This sends your custom response as the tool result, allowing you to guide the conversation without executing the actual tool.
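
This works because a tool result is just data: your text can be packaged in the same shape as a real result (MCP's `CallToolResult` carries a list of content blocks plus an `isError` flag), so the model continues the conversation with it as if the tool had run. A sketch, with the `synthetic_result` helper being hypothetical:

```python
def synthetic_result(text: str) -> dict:
    """Package custom text in the shape of an MCP CallToolResult."""
    return {"content": [{"type": "text", "text": text}], "isError": False}

result = synthetic_result("The file already exists and shouldn't be modified")
```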
```nu
gpt mcp list
```

Shows all currently running MCP servers.
Servers run as cross.stream generators and will:
- Automatically restart if they crash
- Be available across multiple conversations
- Terminate when the cross.stream session ends
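
The restart-on-crash behavior can be pictured as a supervision loop around the server process: a crash (non-zero exit) triggers a respawn, a clean exit ends supervision. A minimal sketch of that idea — cross.stream's actual generator supervision may differ:

```python
import subprocess
import time

def supervise(command: list[str], max_restarts: int = 3) -> int:
    """Keep a server process alive, restarting it if it crashes.

    Returns the number of restarts performed.
    """
    restarts = 0
    while restarts <= max_restarts:
        proc = subprocess.Popen(command)
        proc.wait()                    # block until the server exits
        if proc.returncode == 0:
            return restarts            # clean exit: session ended
        restarts += 1                  # crash: spawn a fresh process
        time.sleep(0.1)                # short backoff for the sketch
    return restarts
```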
Here are some useful MCP servers to try:
File System Operations:

```nu
gpt mcp register filesystem "npx -y @modelcontextprotocol/server-filesystem /workspace"
```

Web Search:

```nu
gpt mcp register brave-search "npx -y @modelcontextprotocol/server-brave-search"
```

Git Operations:

```nu
gpt mcp register git "npx -y @modelcontextprotocol/server-git"
```

Database Access:

```nu
gpt mcp register postgres "npx -y @modelcontextprotocol/server-postgres postgresql://user:pass@localhost/db"
```

The recommended approach for working with new MCP servers:
- Register the server
- List available tools to understand capabilities
- Test individual tools with `gpt mcp tool call`
- Use in conversations with the `--servers` flag
- Iterate based on results
Hands-on experimentation builds understanding of server capabilities and effective prompts.
See the commands reference for complete MCP command options.