Example use case: A copiloted text editor
Using the in-line dependent type to create a copiloted text editor.



Writing is slow. AI can speed it up by providing suggestions: the faster those suggestions arrive, and the better their quality, the faster you can write. Many people turn to ChatGPT despite the time cost of copying text back and forth, because it lets them interact with suggestions freely. And AI can help far more when it has access to the context you're working in, not just whatever snippet you could be bothered to copy and paste. A true writing copilot should give you the best of all four.
Ideal Copilot:
- Close to your work
- Good quality
- Interactive
- Uses context
We believe there are four common use cases for a copiloted text editor:
- Suggestion - You're looking for inspiration on what to write next and would like a few examples to pick from.
- Inline edits - You're not happy with how a sentence reads and want to rephrase it.
- Refactoring - You would like to change the tone or fix a recurring mistake across a document in one go.
- Informed suggestions - You're writing an email and want to refer back to prior emails without reading the entire chain, so you want suggestions of what to write next, "grounded" in the information those prior emails contain.
In this blog post we'll develop a schema that covers the first three use cases. The tool accepts a highlighted portion of text and returns a suggestion; all three use cases work with the same schema because our replacement type is flexible and accepts both empty and non-empty highlights.
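To make that concrete, here is a small sketch of how the three use cases might map onto selections. The `{ start, end }` character offsets match the `selection` argument used later in this post; the specific mappings are illustrative assumptions about how an editor would drive the tool, not part of the API.

```typescript
const workingDocument = "The quick brown fox jumps over the lazy dog. It was a sunny day.";

// Suggestion: an empty highlight (start === end) asks for new text at the cursor.
const suggestionSelection = { start: workingDocument.length, end: workingDocument.length };

// Inline edit: highlight the sentence you want rephrased.
const sentence = "It was a sunny day.";
const inlineEditSelection = {
  start: workingDocument.indexOf(sentence),
  end: workingDocument.indexOf(sentence) + sentence.length,
};

// Refactoring: highlight the whole document so a tone change or recurring fix applies throughout.
const refactorSelection = { start: 0, end: workingDocument.length };
```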
Copiloted Editor Implementation
We want to accept a highlighted portion of text and return a suggestion, and we'll use the `replacement` schema type to do it. To keep things simple we'll return just a single suggestion, but the schema could easily be extended to an array of replacements to support any number of suggestions (see the sketch after the schema below).
A schema to achieve this is as follows:
```typescript
const schema = {
  replacement: {
    type: "replacement",
    documentId: "text",
    start: selection.start,
    end: selection.end,
  },
};
```
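For multiple suggestions, one option is to wrap the replacement in an array. The sketch below is a hedged guess at what that could look like; we have not verified that the COGA schema format supports an `array`/`items` construct, so treat the shape as an assumption.

```typescript
// Assumption: the schema format accepts an array whose items follow the same replacement shape.
// `selection` is the same { start, end } object used in the single-replacement schema above.
const multiSuggestionSchema = {
  replacements: {
    type: "array",
    items: {
      type: "replacement",
      documentId: "text",
      start: selection.start,
      end: selection.end,
    },
  },
};
```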
Creating system and user prompts.
In this case the LLM prompt is very simple: we set the tone for the kind of suggestions we're looking for and emphasize what we'd like the LLM to pay special attention to (spelling, grammar, punctuation, creativity, etc.). Finally, we include the user's query for the LLM to respond to.
```typescript
const prompt = `You are a copiloted text editor and an expert writer. The user needs help writing; offer intelligent suggestions,
be creative, and carefully consider context, the user's needs, and what they mean by their command. Give responses in full sentences when appropriate.
Carefully consider grammar, capitalization, punctuation, and spelling.

User's command: "${suggestionQuery}";`;
```
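The `suggestionQuery` is whatever command the user types into the editor. For concreteness, here are some hypothetical commands covering the three use cases; any free-form instruction works.

```typescript
// Hypothetical user commands, one per use case.
const suggestionCommand = "Suggest the next sentence for this paragraph."; // Suggestion (empty highlight)
const inlineEditCommand = "Rephrase this sentence to be more concise.";    // Inline edit
const refactorCommand = "Rewrite the highlighted text in a formal tone.";  // Refactoring
```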
Bringing It Together.
The COGA API `v1/generate` endpoint expects the prompt, documents, and schema as its inputs. Here is what the final code for `copilot_suggestion` might look like:
```typescript
export async function copilot_suggestion(
  workingDocument: string,
  suggestionQuery: string,
  selection: { start: number; end: number },
) {
  // Schema describing the expected response
  const schema = {
    replacement: {
      type: "replacement",
      documentId: "text",
      start: selection.start,
      end: selection.end,
    },
  };

  const prompt = `You are a copiloted text editor and an expert writer. The user needs help writing; offer intelligent suggestions,
be creative, and carefully consider context, the user's needs, and what they mean by their command. Give responses in full sentences when appropriate.
Carefully consider grammar, capitalization, punctuation, and spelling.

User's command: "${suggestionQuery}";`;

  const documents: { id: string; text: string }[] = [
    { id: "text", text: workingDocument },
  ];

  const response = await fetch("https://coga.ai/api/v1/generate", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${COGA_API_KEY}`,
    },
    body: JSON.stringify({
      schema: schema,
      documents: documents,
      prompt: prompt,
    }),
  });

  const data = await response.json();
  // The response will be a JSON object with a single key "replacement"
  return data;
}
```
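Once the user accepts a suggestion, the editor can splice the replacement back into the document. Below is a minimal sketch, assuming the returned `replacement` object carries the generated `text` along with the `start` and `end` offsets it applies to; these field names are our assumption, not confirmed API output.

```typescript
// Sketch: apply an accepted suggestion by splicing it into the document.
// Assumes data.replacement looks like { text, start, end }; adjust to the real response shape.
export async function apply_suggestion(
  workingDocument: string,
  suggestionQuery: string,
  selection: { start: number; end: number },
): Promise<string> {
  const data = await copilot_suggestion(workingDocument, suggestionQuery, selection);
  const { text, start, end } = data.replacement;
  return workingDocument.slice(0, start) + text + workingDocument.slice(end);
}
```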
In future versions of the COGA AI API we'll be enabling stateful endpoints. These will give the user a better interactive experience by making it possible to continuously iterate on a suggestion.
Final Results
We went ahead and built an example mini-app that demonstrates our API in action, making suggestions.
To play around with this example (this works best on desktop devices):
- (Optional) Write something in the text box or click one of the buttons below to select some initial text.
- Click inside the text box and select some text, then click the button on the toolbar (or use the keyboard shortcut) to open the prompt section.
- Write a prompt in the prompt section and submit it to see the copilot make changes to the text.
- Accept or reject the changes made by the copilot.
Disclaimer: This demo runs stably on an 8-billion-parameter model; for comparison, GPT-3 has 175 billion parameters.