This plugin connected OpenAI models to the Obsidian workspace in a way that respected how people actually write: anchored to an active note, drawing on local context, and needing control over output rather than a generic chat window.
Problem
Standard chat interfaces pull users away from their notes and rarely understand the immediate working context. The goal was to make AI assistance feel like part of the note-taking environment rather than a separate destination.
Solution
I built a command-driven plugin with a modal prompt interface, optional note-context injection, and multiple output modes. Users can summarize, rewrite, generate fresh note content, or insert responses back into the current file while controlling formatting, model choice, and output constraints.
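The request-assembly step can be sketched as a small pure function, assuming the OpenAI chat-completions message format. The mode names, instruction strings, and option shape here are illustrative assumptions, not the plugin's actual API:

```typescript
// Sketch: build the chat messages for one command invocation.
// Mode names and instruction text are hypothetical stand-ins.
type Role = "system" | "user";

interface ChatMessage {
  role: Role;
  content: string;
}

interface RequestOptions {
  mode: "summarize" | "rewrite" | "generate";
  noteContext?: string; // body of the active note, if context injection is on
}

function buildMessages(userPrompt: string, opts: RequestOptions): ChatMessage[] {
  const instructions: Record<RequestOptions["mode"], string> = {
    summarize: "Summarize the provided note content concisely in Markdown.",
    rewrite: "Rewrite the provided text, preserving meaning and Markdown structure.",
    generate: "Generate new note content in Markdown.",
  };
  const messages: ChatMessage[] = [
    { role: "system", content: instructions[opts.mode] },
  ];
  // Optional note-context injection: the active note rides along as its own message.
  if (opts.noteContext) {
    messages.push({ role: "user", content: `Note context:\n${opts.noteContext}` });
  }
  messages.push({ role: "user", content: userPrompt });
  return messages;
}
```

Keeping this assembly separate from the transport layer makes the summarize/rewrite/generate split explicit and easy to test without touching the API.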
- Core actions: prompt, summarize, rewrite
- Context source: active-note aware
- Output modes: inline or new note
- API model: streaming responses
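The two output modes above amount to a small dispatch over where the response lands. A minimal sketch, with hypothetical callbacks standing in for Obsidian's editor and vault APIs:

```typescript
// Sketch: route one response either inline at the cursor or into a new note.
// Callback names are illustrative, not the plugin's actual API surface.
type OutputMode = "inline" | "new-note";

interface OutputSinks {
  insertAtCursor: (text: string) => void; // write into the active file
  createNote: (title: string, body: string) => void; // create a separate note
}

function routeOutput(mode: OutputMode, response: string, sinks: OutputSinks): void {
  if (mode === "inline") {
    sinks.insertAtCursor(response);
  } else {
    // Derive a title from the first line so the new note is identifiable.
    const title = response.split("\n")[0].slice(0, 60) || "AI response";
    sinks.createNote(title, response);
  }
}
```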
Design decisions
- Streaming responses kept the modal responsive during longer generations.
- Settings persisted API config and workflow preferences so the tool stayed usable across sessions.
- Rewrite behavior was explicitly separated from new-note generation to reduce accidental data loss.
- Formatting controls helped keep output aligned with Markdown-heavy knowledge workflows.
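The streaming decision above can be illustrated with a small accumulator: each delta is appended and rendered immediately instead of waiting for the full completion. The chunk shape loosely mirrors streamed chat-completion deltas; the render callback is a stand-in for updating the modal's output pane:

```typescript
// Sketch: fold streamed deltas into a buffer, re-rendering after each chunk
// so the modal stays responsive during long generations. The chunk shape and
// render callback are illustrative assumptions.
interface StreamChunk {
  delta: string; // incremental text from the streaming API
}

function renderStream(
  chunks: Iterable<StreamChunk>,
  render: (partial: string) => void
): string {
  let buffer = "";
  for (const chunk of chunks) {
    buffer += chunk.delta;
    render(buffer); // in the plugin, this would update the modal's preview element
  }
  return buffer;
}
```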
Note
Missing details I estimated
The source notes did not include the exact model lineup currently used, release maturity, or whether this plugin was distributed publicly. Those are the key open details for a stronger product story.
Outcome
The plugin shows how AI assistance can be embedded into an existing writing environment in a way that is context-aware, controlled, and operationally useful rather than gimmicky.