AI Tool Explored: Warp Terminal
1. Introduction
I have been using Warp for a while, but mostly for basic tasks plus occasional work with its AI agent (basic Q&A). Last week I wanted to create a changelog file for my hoauth2 library. But I had already added release notes to GitHub for every release, so I didn’t want to do the manual work of copying and pasting. This seemed like a perfect job for the Warp AI agent.
And that’s how this journey started.
2. Generating Changelog from GitHub Releases
I didn’t realize I should record this, since I had no expectations at all.
First, I tried out the GitHub CLI myself to list all release tags and view release notes in JSON format. Warp was smart enough to auto-complete the commands (one of the nice features of Warp). The command worked as expected.
I had a quick thought about how I would do this manually:
- Dump all release notes (JSON) into one JSON file
- Process this JSON file to generate separate changelogs (markdown) for individual packages
- In GitHub releases, I typically generate one note for all packages released together
Then I started asking the agent to use those commands to dump all release notes into one JSON file. The agent began thinking and generated a shell script to loop through the results of `gh release list` and do something (I didn’t bother to examine it initially), then ran it.
Apparently, the script was incorrect since it was generating notes based on git commits rather than releases. I pointed this out, and the agent quickly fixed it. Then came the FUN part.
After my confirmation, the agent tried to run the script but somehow it didn’t work (according to what the agent told me). The agent discovered issues in the script by itself and tried to auto-correct it. After several unsuccessful attempts, the agent switched to Python, generating a script that it was more comfortable with and asked for my confirmation. I did a quick review, which looked fine, and gave the go-ahead. Voilà! All GitHub release notes were saved to a JSON file. The whole process took about 10 minutes.
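For reference, here is a rough sketch of what that kind of script looks like. This is my own reconstruction, not the agent’s actual output: it assumes a reasonably recent gh CLI that is already authenticated (and supports `--json` on `gh release list`), and the limit and file names are placeholders.

```python
# Sketch: dump every GitHub release note for the current repo into one JSON file.
# Assumes gh is installed, authenticated, and run from inside the repository.
import json
import subprocess


def gh_json(args):
    """Run a gh command and parse its JSON output."""
    out = subprocess.run(
        ["gh", *args], check=True, capture_output=True, text=True
    ).stdout
    return json.loads(out)


# List all release tags, then fetch the full note for each one.
tags = [
    r["tagName"]
    for r in gh_json(["release", "list", "--limit", "200", "--json", "tagName"])
]
releases = [
    gh_json(["release", "view", tag, "--json", "name,tagName,body"]) for tag in tags
]

with open("releases.json", "w") as f:
    json.dump(releases, f, indent=2)

print(f"Wrote {len(releases)} release notes to releases.json")
```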
Next, I asked the agent to generate changelog markdown based on the previously created JSON file, giving very specific instructions (a rough sketch of this conversion step follows the list):
- Each JSON object should be one section in markdown (h2)
- The release notes (`body` field) should be formatted nicely in bullet points
- Generate release notes per library (identifiable via the `name` field)
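To give an idea of what this step involves, here is a minimal sketch of the JSON-to-markdown conversion. Again, this is my own simplification rather than the agent’s script; the library names and the grouping rule are assumptions.

```python
# Sketch: read releases.json (from the previous step) and write one
# CHANGELOG-<library>.md per library, with an h2 section per release
# and the release body reduced to bullet points.
import json
from collections import defaultdict

LIBRARIES = ["hoauth2", "hoauth2-providers"]  # hypothetical package names


def body_to_bullets(body: str) -> str:
    """Turn each non-empty line of the release body into a bullet point."""
    lines = [ln.strip().lstrip("*-# ").strip() for ln in body.splitlines()]
    return "\n".join(f"- {ln}" for ln in lines if ln)


with open("releases.json") as f:
    releases = json.load(f)

sections = defaultdict(list)
for rel in releases:
    for lib in LIBRARIES:
        # Crude grouping: a release belongs to a library if its name mentions it.
        # Combined releases (one note for several packages) land in every
        # matching changelog, which is what I wanted here.
        if lib in rel["name"]:
            sections[lib].append(f"## {rel['name']}\n\n{body_to_bullets(rel['body'])}\n")

for lib, parts in sections.items():
    with open(f"CHANGELOG-{lib}.md", "w") as f:
        f.write(f"# Changelog for {lib}\n\n" + "\n".join(parts))
```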
This was fairly straightforward (requiring only 1 or 2 back-and-forth conversations). Two markdown files were generated. After reviewing them, I requested several improvements:
- Remove links and author information (these were in the GitHub release notes but I didn’t want them in the changelog)
- Make the language describing dependency upgrades consistent (it varied in the GitHub release notes)
- Consolidate sections with subtitles (h3) into bullet points
Again, this was handled efficiently, and here are the final results:
3. Changelog Generation: Second Attempt
I found the first experience incredibly fun! So I decided to record myself generating a changelog from GitHub releases again.
Here is the recording: https://youtu.be/PjYT5kUkOoU
- I explicitly asked the agent to use the GitHub CLI to grab release notes.
- I expected this to be trivial, but it took the agent a couple of tries to figure out how to use the CLI to fetch all releases
- The agent saved the release list into a separate JSON file instead of writing a script to loop through the results directly (as in my first experience)
- Nevertheless, it was still reasonably straightforward. The agent figured it out and dumped all release notes into one JSON file.
- Next, I provided a prompt to generate changelog markdown based on this JSON file
- The agent tried VERY HARD to use a jq script, making several attempts without success
- So I aborted that approach and asked it to use Python instead, which the agent got working quickly
- The markdown file had formatting issues like `* ## new changes`, probably stemming from my request for “No sub-header (h3/h4)”. My intention was to consolidate the text into bullet points, but my instructions may not have been clear enough (the sketch after this list shows the kind of clean-up I was after)
- I asked for adjustments to the markdown file. The agent began tweaking the Python script that generates the changelog from the JSON file and continued making refinements
- I became impatient (wishing the Thinking process was faster) so I asked the agent to modify the markdown file directly with a couple more prompts, and eventually the changelog files looked good
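This is roughly the clean-up pass I had in mind when I said “no sub-headers”: demote leftover headers in the generated changelog into plain bullets. A hypothetical sketch, not the agent’s actual fix; the file name is a placeholder.

```python
# Sketch: turn lines like "* ## new changes" or "### Bug fixes" into "- ..."
# bullets, while leaving the top-level "## <release>" section headers alone.
import re
from pathlib import Path

path = Path("CHANGELOG-hoauth2.md")  # hypothetical file

fixed_lines = []
for line in path.read_text().splitlines():
    bulleted_header = re.match(r"^\s*[*-]\s*#{2,6}\s+(.*)$", line)  # "* ## foo"
    sub_header = re.match(r"^\s*#{3,4}\s+(.*)$", line)              # "### foo"
    m = bulleted_header or sub_header
    fixed_lines.append(f"- {m.group(1)}" if m else line)

path.write_text("\n".join(fixed_lines) + "\n")
```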
To me, this second experience wasn’t as smooth as the first one.
4. Improving Documentation for Source Code
Here is the recording: https://youtu.be/DNP9kWQGEPo?si=77RfsXlSHfb_6x8V
I was curious how much the agent could help improve the documentation (source code comments, generated docs).
Even though I asked for improvements to both, the agent focused on the README file first. The Thinking process wasn’t as fast as I’d like, but after several conversations, the agent generated a decent README file.
I asked the agent to commit these changes to a branch, which was a trivial task. Great!
Then I requested improvements and typo fixes in the source code comments. Unsurprisingly, the agent found several typos and made some good suggestions for the code comments.
The agent approached this task one file at a time, which was time-consuming. In a few instances, it generated module-level information that I found redundant (like license, author, etc.). Other than that, all good! The agent was able to commit the changes and open a PR.
Well done!
5. Generating Terraform Script to Set Up Auth0 Tenant
Here is the recording: https://youtu.be/5NLoRB-eif8?si=WSOR2YLLcE2VcBw2
I started by asking the agent to generate an envrc file to run Terraform on an Auth0 tenant. After the envrc file was generated, the agent automatically began generating Terraform scripts to create resources (App), which pleasantly surprised me.
While the agent was Thinking, I modified the envrc file by adding real client ID, secret, and domain values.
But for some reason, the agent kept adding client ID and secret placeholders back to the envrc!
Even though I explicitly requested an OIDC configuration (no SAML!), the agent seemed to ignore this after a few rounds of thinking!
Additionally, the agent created `variables.tf` and `output.tf` files but still kept outputs and variables in `main.tf`.
Eventually, I gave up and aborted the conversation.
I started a new conversation by describing what I needed, and it worked with only a couple of hiccups. When I tried to run the Terraform script, it couldn’t read the envrc file properly. I tried troubleshooting myself without success. Finally, the agent helped diagnose the issue. The envrc file generated initially had incorrect encoding, which the agent quickly fixed!
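I don’t remember the exact encoding problem, but a quick check along these lines would have surfaced it. This is a hypothetical sketch, not what the agent did; the file name is a placeholder.

```python
# Sketch: check whether the generated envrc file is plain UTF-8, or carries
# a BOM / UTF-16 encoding that the shell or direnv may choke on.
from pathlib import Path

raw = Path(".envrc").read_bytes()

if raw.startswith(b"\xef\xbb\xbf"):
    print("File starts with a UTF-8 BOM; the shell may choke on it")
elif raw.startswith((b"\xff\xfe", b"\xfe\xff")):
    print("File looks UTF-16 encoded; re-save it as plain UTF-8")
else:
    try:
        raw.decode("utf-8")
        print("Plain UTF-8; encoding is probably not the issue")
    except UnicodeDecodeError as e:
        print(f"Not valid UTF-8: {e}")
```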
The agent then found its own errors in the outputs for the client secret and the token endpoint auth method. I wasn’t sure of the correct approach (I thought it required another resource and asked the agent to add it, but it didn’t seem to understand). So I asked it to leave those parts out and got a working first version (after a few self-corrections).
Then I asked again to fix the client secret and auth method using another resource, which worked this time!
I feel this wasn’t the greatest experience. Some of the back-and-forth and self-corrections could probably have been avoided from the start. It’s hard to tell whether my prompts should have been more accurate or if the AI should have been smarter!
6. Conclusion
Overall, Warp Terminal’s AI agent works great so far! It excels at generating and modifying files and automating workflows. While not perfect (sometimes requiring multiple attempts or clearer instructions), I think it significantly speeds up many routine tasks.
The self-correction ability is particularly noteworthy, though the thinking process occasionally feels slow during complex operations. The agent’s ability to troubleshoot its own errors and switch approaches when needed (like moving from a shell script to Python) was quite impressive.