Agentic pre-commit hook with the Opencode Go SDK
I’ve been an avid user of Opencode for a long time, even before it became widely popular. It’s not the only coding agent in my toolkit (I also reach for Amp from time to time), but Opencode holds a special place because of its LSP integration and the dead-simple ability to swap models on the fly.
It’s also remarkably extensible. You can write plugins in TypeScript, apply custom themes, build tools and web apps around it, and even extend it with Skills. The community around it is growing fast — check out awesome-opencode or opencode.cafe if you want to explore what people are building.
What I recently discovered, though, is that Opencode has a Go SDK (not to be confused with Opencode Go), and that’s what inspired this whole project. Let’s see what we can build with it.
The Problem: Catching Dumb Mistakes Before Commit
I’m the type of developer who absolutely hates typos in code and stray debug print statements — and also the type who produces them constantly. Regular formatters and linters don’t always catch these things, and let’s be honest: other code reviewers aren’t exactly trustworthy either. Nobody really reads the code anymore.
There are tools like CodeRabbit, GitHub Copilot, and Graphite that can review your pull requests after the fact. But what if I want to run similar checks before committing my code? And what if I want those checks to be configurable?
The idea: run an AI-powered code review as a pre-commit hook, using a coding agent that’s already aware of the codebase.
Since I use Opencode daily, let’s build it with that.
Opencode Server
Here’s something a lot of people don’t realize: Opencode isn’t simply a TUI wrapper. It ships with a full server, and the TUI is just one client that talks to it. That means you can connect to the server from anywhere — your phone, a web browser, or a custom Go program.
You can start the server standalone on any port, optionally password-protected:
opencode serve --port 4096
This also means you can run multiple Opencode instances and agents simultaneously.
The Architecture
Here’s the high-level flow of what we’re building:
┌──────────────┐ ┌──────────────────┐ ┌────────────────┐
│ git commit │────▶│ pre-commit hook │────▶│ Opencode Server│
│ (triggers) │ │ (Go binary) │ │ │
└──────────────┘ └──────────────────┘ └────────┬───────┘
│ │
│ 1. Get staged diff │
│ 2. Create session │
│ 3. Send diff + prompt │
│ 4. Parse JSON response │
│ 5. Pass / Fail commit │
▼ ▼
┌──────────────────┐ ┌────────────────┐
│ Terminal Output │ │ LLM (Opus) │
│ issues / pass │ │ via Opencode │
└──────────────────┘ └────────────────┘
The Go SDK
The Opencode Go SDK wraps the standard REST API that the server exposes. For our pre-commit hook, we only need three operations:
Session.New — create a fresh review session
Session.Prompt — send the diff and get the review back
Session.Delete — clean up the session when we're done
That’s it. Minimal surface area for a focused tool.
Crafting the Prompt
The pre-commit hook ships with a default prompt that asks the LLM to review for typos, unnecessary debug statements, security issues, bugs, and code style violations — essentially a lightweight CodeRabbit that runs locally.
The tricky part is that LLMs answer in an unstructured way by default, which is hard to parse programmatically. So we need to instruct the model to return its answer as strict JSON and hope it actually follows those instructions.
Here’s the full default prompt, which includes the staged git diff:
out, err := exec.Command("git", "diff", "--cached", "--diff-algorithm=minimal").Output()
if err != nil {
fatal("unable to get git diff: %v", err)
return
}
diff := strings.TrimSpace(string(out))
if diff == "" {
fmt.Println("no staged changes to review")
return
}
prompt := `You are a code reviewer. Review the staged git diff below.
Look for typos, unnecessary debug statements, bugs, security issues, and code style problems.
Respond ONLY with a JSON object (no markdown fences, no extra text):
{
"status":"pass|fail|warn",
"issues": [
{"file":"...","line":0,"severity":"error|warning|info","message":"..."}
]
}
If everything looks good, return {"status":"pass","issues":[]}
` + "```git diff:\n" + diff + "\n```"
The idea is that users can later customize this prompt per-repo to match their own conventions and priorities.
Talking to Opencode
With the prompt ready, we make a sequence of SDK calls. The flow is straightforward — create a session, send the prompt, read the response, then clean up:
const baseURL = "http://127.0.0.1:4096"
client := opencode.NewClient(
option.WithBaseURL(baseURL),
option.WithMaxRetries(1),
)
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Minute)
defer cancel()
// Create a session
session, err := client.Session.New(ctx, opencode.SessionNewParams{
Title: opencode.F("pre-commit review"),
})
if err != nil {
fatal("unable to create session: %v", err)
return
}
// Always clean up
defer client.Session.Delete(context.Background(), session.ID, opencode.SessionDeleteParams{})
fmt.Fprintln(os.Stderr, "Reviewing staged changes...")
// Send the prompt
resp, err := client.Session.Prompt(ctx, session.ID, opencode.SessionPromptParams{
Parts: opencode.F([]opencode.SessionPromptParamsPartUnion{
opencode.TextPartInputParam{
Type: opencode.F(opencode.TextPartInputTypeText),
Text: opencode.F(prompt),
},
}),
})
if err != nil {
fatal("unable to prompt: %v", err)
return
}
// Extract text from response parts
var text string
for _, part := range resp.Parts {
if tp, ok := part.AsUnion().(opencode.TextPart); ok {
text += tp.Text
}
}
A one-minute timeout is more than enough for a typical diff review.
Parsing the Result
The response comes back as a JSON string (if the model cooperated), which we deserialize into simple Go structs:
type Review struct {
Status string `json:"status"`
Issues []Issue `json:"issues"`
}
type Issue struct {
File string `json:"file"`
Line int `json:"line"`
Severity string `json:"severity"`
Message string `json:"message"`
}
Then we parse and display the results:
var review Review
if err := json.Unmarshal([]byte(text), &review); err != nil {
fatal("unable to parse json: %v\nraw response:\n%s", err, text)
return
}
fmt.Printf("Review status: %s\n", review.Status)
for _, issue := range review.Issues {
fmt.Printf(" [%s] %s:%d — %s\n", issue.Severity, issue.File, issue.Line, issue.Message)
}
if len(review.Issues) == 0 {
fmt.Println("No issues found!")
}
if review.Status == "fail" {
os.Exit(1)
}
If the status is "fail", we exit with code 1, which tells Git to abort the commit. Simple and effective.
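If you later want "warn" to behave differently from a hard failure, the exit decision can be factored out. A hedged sketch: warnings block the commit only under a strict mode, toggled here by a hypothetical `REVIEW_STRICT` environment variable that the hook as shown does not define:

```go
package main

import (
	"fmt"
	"os"
)

// shouldBlock decides whether to abort the commit. "fail" always blocks;
// "warn" blocks only in strict mode. REVIEW_STRICT is a hypothetical
// extension, not part of the hook described above.
func shouldBlock(status string, strict bool) bool {
	switch status {
	case "fail":
		return true
	case "warn":
		return strict
	default:
		return false
	}
}

func main() {
	strict := os.Getenv("REVIEW_STRICT") == "1"
	for _, status := range []string{"pass", "warn", "fail"} {
		fmt.Printf("%s -> block=%v\n", status, shouldBlock(status, strict))
	}
}
```

Any nonzero exit code aborts the commit, so the mapping from review status to exit code is the only policy knob the hook really has.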
Installing the Hook
There are fancier ways to manage pre-commit hooks — frameworks like pre-commit or tools like Husky. But a pre-commit hook is really just an executable file, so manual installation is trivial.
Drop the following into .git/hooks/pre-commit:
#!/usr/bin/env bash
exec opencode-pre-commit
Make sure the Go binary (opencode-pre-commit) is somewhere on your $PATH, and you’re set.
Testing It Out
To see everything in action:
1. Start the Opencode server:
opencode serve --port 4096
2. Install the hook in any repo you want to test:
vim .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
3. Make a deliberately bad change (introduce a typo, leave a hardcoded secret, add a debug statement) and try to commit:
git add .
git commit -m "hello people"
If the AI reviewer catches something, you’ll see output like this:
Reviewing staged changes...
Review status: fail
[error] oc/main.go:15 — Hardcoded secret/API key in source code — remove and rotate
[warning] oc/go.mod:3 — Go version 1.25.5 may cause build failures for older toolchains
[info] oc/main.go:16 — baseURL is hardcoded — consider an ENV override for flexibility
exit status 1
The commit is blocked. Fix the issues, re-stage, and try again.
Things to Keep in Mind
I used the Opus 4.6 model for testing and it consistently respected the JSON format and produced useful reviews. Other coding-focused models should work well too.
That said, a few caveats worth noting. Like any LLM output, the results aren’t deterministic — you may get slightly different feedback each time. You might also hit API rate limits depending on your usage. And while the model is surprisingly good at catching real issues, it can occasionally flag things that don’t matter. Treat it as a helpful second pair of eyes, not an infallible gatekeeper.
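One practical mitigation for the JSON caveat: models occasionally wrap the object in markdown fences despite the instructions, which would make json.Unmarshal fail on an otherwise valid review. A small sanitizer run before parsing can rescue those responses. This is a sketch — the helper name and the outermost-braces fallback are my own heuristics, not part of the hook as shown:

```go
package main

import (
	"fmt"
	"strings"
)

// extractJSON strips markdown code fences a model may have wrapped around
// its answer, then falls back to slicing out the outermost {...} in case
// stray text surrounds the object.
func extractJSON(s string) string {
	s = strings.TrimSpace(s)
	if strings.HasPrefix(s, "```") {
		// Drop the opening fence line (e.g. ```json).
		if i := strings.Index(s, "\n"); i >= 0 {
			s = s[i+1:]
		}
		// Drop the closing fence.
		if i := strings.LastIndex(s, "```"); i >= 0 {
			s = s[:i]
		}
	}
	// Keep only the outermost JSON object, if one is present.
	start := strings.Index(s, "{")
	end := strings.LastIndex(s, "}")
	if start >= 0 && end > start {
		s = s[start : end+1]
	}
	return strings.TrimSpace(s)
}

func main() {
	raw := "```json\n{\"status\":\"pass\",\"issues\":[]}\n```"
	fmt.Println(extractJSON(raw))
}
```

Passing the sanitized string to json.Unmarshal instead of the raw response makes the hook noticeably more tolerant of model formatting quirks.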
Links
Opencode: opencode.ai
Opencode Go SDK: github.com/anomalyco/opencode-sdk-go
Source Code: github.com/plutov/opencode-pre-commit


