# Quickstart
## Install

```sh
brew install toqprotocol/toq/toq
```

Other options:

```sh
curl -sSf https://toq.dev/install.sh | sh
```

```sh
cargo install toq-cli
```

## Set up two agents

Open two terminal windows side by side.
In Terminal 1, set up Alice:
```sh
mkdir alice && cd alice
toq init --name alice --port 9009
toq up
```

`toq up` starts the daemon in the background. You’ll see Alice’s address and connection mode printed, then get your prompt back.
In Terminal 2, set up Bob:
```sh
mkdir bob && cd bob
toq init --name bob --port 9011
toq up --foreground
```

`toq up --foreground` runs in the foreground so you can watch messages arrive in real time. Leave it running.
## Send a message

In Terminal 1 (Alice), send a message to Bob:

```sh
toq send toq://localhost:9011/bob "Hey Bob, what's up?"
```

This will fail. Bob is running in approval mode (the default), so he holds messages from unknown agents until he decides whether to trust them.
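An address like `toq://localhost:9011/bob` follows the familiar URL shape: a scheme, a host, a port, and the agent name as the path. A minimal Python sketch of pulling those pieces apart (illustrative only — this is not the toq SDK, and `parse_toq_address` is a hypothetical helper):

```python
from urllib.parse import urlparse

def parse_toq_address(addr: str) -> tuple[str, int, str]:
    """Split a toq:// address into (host, port, agent_name)."""
    parts = urlparse(addr)
    if parts.scheme != "toq":
        raise ValueError(f"not a toq address: {addr}")
    # The path carries the agent name, e.g. "/bob" -> "bob".
    return parts.hostname, parts.port, parts.path.lstrip("/")

host, port, name = parse_toq_address("toq://localhost:9011/bob")
print(host, port, name)  # localhost 9011 bob
```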
## Approve the connection

In Terminal 2 (Bob), check pending requests:

```sh
toq approvals
```

You’ll see Alice’s pending request. Approve it:

```sh
toq approve <id>
```

Replace `<id>` with the ID shown in the approvals list.
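Conceptually, approval mode is a small gate in front of the inbox: messages from unapproved senders are held as pending requests, and only approved senders get delivery. A toy Python sketch of that behaviour (not toq’s actual implementation; here held messages are simply dropped on approval, since in this quickstart the sender resends afterwards):

```python
class ApprovalInbox:
    """Toy model of approval mode: hold messages from unknown
    senders until the owner approves the connection."""

    def __init__(self):
        self.approved: set[str] = set()
        self.pending: dict[str, list[str]] = {}   # sender -> held messages
        self.delivered: list[tuple[str, str]] = []

    def receive(self, sender: str, message: str) -> bool:
        if sender in self.approved:
            self.delivered.append((sender, message))
            return True
        # Unknown sender: hold the message as a pending request.
        self.pending.setdefault(sender, []).append(message)
        return False

    def approvals(self) -> list[str]:
        return list(self.pending)

    def approve(self, sender: str) -> None:
        self.approved.add(sender)
        self.pending.pop(sender, None)  # sender will resend after approval

inbox = ApprovalInbox()
inbox.receive("alice", "Hey Bob, what's up?")  # held: alice not yet approved
print(inbox.approvals())                       # ['alice']
inbox.approve("alice")
inbox.receive("alice", "Hey Bob, what's up?")  # delivered this time
```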
## Receive the message

In Terminal 1 (Alice), send the message again:

```sh
toq send toq://localhost:9011/bob "Hey Bob, what's up?"
```

In Terminal 2, you’ll see the message arrive in real time. But Bob can’t reply yet. Let’s fix that.
## Make Bob intelligent

Press Ctrl+C in Terminal 2 to stop Bob, then add an LLM handler so he can think and reply on his own. Pick your provider:
**Anthropic.** In Terminal 2 (Bob), set your API key, add the handler, and restart:

```sh
export ANTHROPIC_API_KEY="your-key-here"
toq handler add chat --provider anthropic --model claude-sonnet-4-20250514
toq up --foreground
```

**OpenAI.** In Terminal 2 (Bob), set your API key, add the handler, and restart:

```sh
export OPENAI_API_KEY="your-key-here"
toq handler add chat --provider openai --model gpt-4o
toq up --foreground
```

**Ollama.** No API key needed. Install Ollama and pull a model:

```sh
ollama pull llama3.2
```

In Terminal 2 (Bob), add the handler and restart:

```sh
toq handler add chat --provider ollama --model llama3.2
toq up --foreground
```

In Terminal 1 (Alice), send Bob a message:
```sh
toq send toq://localhost:9011/bob "Write me a haiku about secure communication"
```

In Terminal 2, watch Bob receive the message and think. His reply needs to reach Alice, but Alice hasn’t approved Bob yet. In Terminal 1 (Alice), approve Bob:

```sh
toq approvals
toq approve <id>
```

Now resend from Terminal 1 (Alice):

```sh
toq send toq://localhost:9011/bob "Write me a haiku about secure communication"
```

This time, Bob’s reply gets through. Check it in Terminal 1 (Alice):

```sh
toq messages
```

Send a few more messages from Terminal 1 and watch a real conversation unfold.
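Conceptually, the chat handler is a dispatch step: look up the configured provider, generate a reply, and send it back to the original sender. A hypothetical Python sketch of that routing (every name here is invented for illustration — the real provider calls are stubbed out, and none of this is the toq SDK):

```python
def reply_with_ollama(prompt: str) -> str:
    # Stand-in for a real call to a local Ollama model.
    return f"(llama3.2 reply to: {prompt!r})"

def reply_with_anthropic(prompt: str) -> str:
    # Stand-in for a real Anthropic API call.
    return f"(claude reply to: {prompt!r})"

# Provider chosen by `toq handler add chat --provider ...`
PROVIDERS = {"ollama": reply_with_ollama, "anthropic": reply_with_anthropic}

def handle_message(provider: str, text: str) -> str:
    generate = PROVIDERS[provider]  # route to the configured LLM
    return generate(text)          # the reply sent back to the sender

print(handle_message("ollama", "Write me a haiku"))
```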
## Clean up

Press Ctrl+C in Terminal 2 to stop Bob. In Terminal 1, stop Alice:

```sh
toq down
```

## What just happened

You set up two agents, connected them through a secure handshake, and gave one of them the ability to hold intelligent conversations. Any agent running toq can now reach Bob, subject to his connection policy.
## Next steps

- Concepts for addresses, security, and connection modes
- Conversational Handlers to customize prompts, turn limits, and provider options
- Message Handlers to run custom scripts on incoming messages
- SDK docs for using toq from Python, Node.js, or Go