I. Introduction
Are you tired of sifting through messy command notes, struggling to find the right syntax for your HPE Ezmeral Data Fabric Object Store tasks? Managing on-prem Object Stores can be a complex endeavor, often requiring a vast collection of commands for various operations. Creating and maintaining accurate, easily searchable cheat sheets is crucial for efficiency, but the traditional methods can be time-consuming and frustrating.
I recently found myself in this exact situation. My command notes, accumulated over months of working with our HPE Ezmeral Data Fabric environment, had become a disorganized mess. Searching through them in Sublime Text was a nightmare, and I knew there had to be a better way. That’s when I had an “aha!” moment: why not leverage the power of Large Language Models (LLMs) to automate the cheat sheet creation process?
In this blog post, I’ll be sharing my experiment comparing three popular tools and LLM APIs: Roo Code with DeepSeek, Roo Code with Gemini 2.0 Flash, and VS Code GitHub Copilot. I tasked each tool with the same objective: to transform my chaotic command notes into a well-structured, easily searchable cheat sheet. Let’s see which one comes out on top!
II. The Problem: My Messy Command Notes (Before LLMs)
Before diving into the LLM solutions, let’s take a look at the problem I was facing.
My notes were a jumbled collection of commands, comments, and random observations. There was no consistent formatting, no clear organization, and searching for specific commands was a tedious process. This lack of structure not only wasted time but also increased the risk of errors when executing commands. I needed a better way to manage my knowledge and streamline my workflow.
III. The LLM Approach: A Three-Way Experiment
To tackle this challenge, I decided to put three LLM-powered tools to the test:
- Roo Code with DeepSeek: Roo Code is an IDE extension that allows you to integrate LLMs directly into your workflow. I used DeepSeek R1 for the architectural planning and DeepSeek V3 for the code generation.
- Roo Code with Gemini 2.0 Flash: I also used Roo Code with Google’s Gemini 2.0 Flash for both the architectural planning and code generation.
- VS Code GitHub Copilot (Pro): GitHub Copilot is a popular AI-powered code completion tool that integrates seamlessly with VS Code. I used the built-in o3-mini for planning and Sonnet 3.5 for code generation.
My goal was to see how each tool would handle the task of organizing and refining my raw command notes into a usable cheat sheet. I provided each tool with the same source data and instructions and then compared the results.
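To make the transformation goal concrete, here is a minimal sketch I wrote for illustration (not output from any of the three tools; the note contents, commands, and endpoint URL are made up). It takes a few lines of "messy notes" and groups the command lines by tool, roughly the kind of restructuring I asked the LLMs to do at a much larger scale:

```python
import re

# Hypothetical raw notes: commands mixed with ad-hoc remarks, no structure.
raw_notes = """
# list buckets??
aws s3api list-buckets --endpoint-url https://objstore.example.com:9000
mc ls myminio   <- works too
aws s3api create-bucket --bucket test1 --endpoint-url https://objstore.example.com:9000
"""

# Naive pass: keep only lines that look like commands and group them by tool.
cheat_sheet = {}
for line in raw_notes.splitlines():
    line = line.strip()
    match = re.match(r"^(aws|mc)\b", line)
    if match:
        # Drop trailing inline remarks such as "<- works too".
        command = line.split("<-")[0].strip()
        cheat_sheet.setdefault(match.group(1), []).append(command)

for tool in sorted(cheat_sheet):
    print(f"## {tool}")
    for cmd in cheat_sheet[tool]:
        print(f"- `{cmd}`")
```

Of course, the whole point of using an LLM is that it can also categorize by intent, write descriptions, and handle inconsistencies that a regex like this would miss.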
IV. Tool #1: Roo Code with DeepSeek
Roo Code provides a powerful environment for working with LLMs. Roo Code, a fork of the open-source project Cline, is a tool that can significantly boost your productivity, depending on how you use it.
To instruct Roo Code, I provided the following prompt, referencing several files within the Roo Code environment:
@/rules.md Always follow the rules in this file.
@/Python_Boto3_ObjectStore.md This file is the source data you should process. It's some notes.
@/longtask.md Follow the instructions in this file. Now, design a plan and prepare well, so that you can achieve the goal later.
rules.md:
```
```
longtask.md:
```
```
Let’s break down what each file contains:
- @/rules.md: This file contains general rules and guidelines for the LLM to follow, such as formatting conventions, preferred coding style, and error handling strategies.
- @/Python_Boto3_ObjectStore.md: This file contains my raw command notes for interacting with the Object Store using the Python Boto3 library. It’s the messy data that needs to be organized.
- @/longtask.md: This file provides detailed instructions on the specific tasks I want the LLM to perform, such as extracting commands, categorizing them, adding descriptions, and generating a well-formatted cheat sheet.
After running the prompt, DeepSeek R1/V3 generated a surprisingly well-structured cheat sheet. The commands were organized into logical categories, and each command was accompanied by a brief description. The output was clean, easy to read, and significantly more useful than my original notes.
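To give a sense of the output format, here is a hypothetical cheat-sheet entry I reconstructed for illustration (not DeepSeek's verbatim output; the bucket name, file names, and method choices are placeholders, though the methods themselves are real Boto3 S3 client calls):

```markdown
## Bucket Operations

- **Create a bucket**: `s3.create_bucket(Bucket="my-bucket")`
- **List buckets**: `s3.list_buckets()`

## Object Operations

- **Upload a file**: `s3.upload_file("local.txt", "my-bucket", "remote.txt")`
```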
V. Tool #2: Roo Code with Gemini 2.0 Flash
Next, I tried the same process using Roo Code with Gemini 2.0 Flash. The prompt was identical to the one I used with DeepSeek.
Gemini 2.0 Flash produced a cheat sheet similar in structure to DeepSeek’s output. However, Gemini’s structure was more tedious to navigate: in particular, it categorized the AWS-CLI-related commands at a much finer granularity.
VI. Tool #3: VS Code GitHub Copilot (Pro)
Finally, I tested VS Code GitHub Copilot. Copilot offers a feature called “Copilot Edits,” which allows you to ask GitHub Copilot to edit code for you using natural language prompts.
GitHub Copilot’s Claude 3.5 Sonnet model demonstrated a high capacity for generalization and effectively “deduplicated” commands.
For example:
```bash
aws s3 cp s3://bucket-1/file1 ...
aws s3 cp s3://bucket-2/file1 ...
```
Commands like these, identical except for their parameters, are deduplicated to:
```bash
aws s3 cp [source] [destination] --endpoint-url [url] [options]
```
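Conceptually, this kind of deduplication can be sketched as a normalization pass that replaces concrete arguments with placeholders and then collapses duplicates. The sketch below is my own simplified illustration, not Copilot's actual method; the regexes and placeholder names are assumptions:

```python
import re

commands = [
    "aws s3 cp s3://bucket-1/file1 /tmp/file1 --endpoint-url https://objstore.example.com:9000",
    "aws s3 cp s3://bucket-2/file1 /tmp/file1 --endpoint-url https://objstore.example.com:9000",
]

def normalize(cmd: str) -> str:
    # Replace the concrete S3 URI, local path, and endpoint URL with
    # placeholders so that commands differing only in their arguments
    # collapse into a single cheat-sheet entry.
    cmd = re.sub(r"s3://\S+", "[source]", cmd, count=1)
    cmd = re.sub(r"(?<= )/\S+", "[destination]", cmd, count=1)
    cmd = re.sub(r"https?://\S+", "[url]", cmd)
    return cmd

deduplicated = sorted(set(normalize(c) for c in commands))
print(deduplicated)
```

An LLM does this kind of generalization without hand-written regexes, which is exactly why it handled my inconsistent notes so well.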
VII. Comparative Analysis: DeepSeek vs. Gemini vs. Copilot
Now, let’s compare the three tools side by side. Here’s a summary of the strengths and weaknesses of each:
- Roo Code with DeepSeek:
  - Strengths: Good overall performance and well-structured output.
  - Weaknesses: The planning and coding stages took the longest. This may be due to the API providers I used: SiliconFlow and Hyperbolic endpoints rather than the official DeepSeek API.
- Roo Code with Gemini 2.0 Flash:
  - Strengths: Detailed and accurate descriptions. Planning and coding were very fast compared to DeepSeek R1/V3.
  - Weaknesses: The generated structure is somewhat redundant.
- VS Code GitHub Copilot (Pro):
  - Strengths: Seamless integration with VS Code; accurate organization and editing.
  - Weaknesses: If anything, it can be too proactive about adding steps I never asked for. For example, I did not explicitly tell it to treat commands that differ only in their object parameters as a single command and “deduplicate” them, yet Copilot’s Claude 3.5 Sonnet model did this on its own. It also proactively sanitized sensitive information: access keys and similar values in my commands were replaced with placeholders.
VIII. Conclusion
In this blog post, I’ve shown how LLMs can be used to automate the creation of command cheat sheets for HPE Ezmeral Data Fabric Object Store management. By leveraging tools like Roo Code, DeepSeek, Gemini, and GitHub Copilot, you can transform your messy command notes into well-organized, easily searchable resources.
Furthermore, you can submit these organized technical notes to RAG-like tools/services so that you can search them in a chat-like manner. A relatively lightweight option is Google NotebookLM.
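As a rough illustration of how such notes become searchable (real RAG services embed and rank the chunks; this stdlib-only sketch of mine just splits a cheat sheet into per-heading chunks and does naive keyword matching, and the cheat-sheet content is made up):

```python
cheat_sheet = """## Bucket operations
aws s3api create-bucket --bucket test1

## Object operations
aws s3 cp [source] [destination]
"""

# Split the markdown into one chunk per "##" heading.
chunks = ["##" + part for part in cheat_sheet.split("##") if part.strip()]

def search(query: str):
    # Return every chunk that mentions the query (case-insensitive).
    return [c for c in chunks if query.lower() in c.lower()]

print(search("cp"))
```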
However, before handing notes over to a service on the Internet, you should either use a completely local deployment or check them for sensitive information first.
The benefits of using LLM-powered cheat sheets are clear: increased efficiency, reduced errors, and improved knowledge management. I encourage you to explore these tools and experiment with LLMs for your own system administration tasks. The possibilities are endless!
In the future, when I have time, I am considering experimenting with an MCP (Model Context Protocol) server to complete similar tasks more automatically.