
System Admin's Savior? Generating Command Cheat Sheets with DeepSeek, Gemini, and Copilot

I. Introduction

Are you tired of sifting through messy command notes, struggling to find the right syntax for your HPE Ezmeral Data Fabric Object Store tasks? Managing on-prem Object Stores can be a complex endeavor, often requiring a vast collection of commands for various operations. Creating and maintaining accurate, easily searchable cheat sheets is crucial for efficiency, but the traditional methods can be time-consuming and frustrating.

I recently found myself in this exact situation. My command notes, accumulated over months of working with our HPE Ezmeral Data Fabric environment, had become a disorganized mess. Searching through them in Sublime Text was a nightmare, and I knew there had to be a better way. That’s when I had an “aha!” moment: why not leverage the power of Large Language Models (LLMs) to automate the cheat sheet creation process?

In this blog post, I’ll be sharing my experiment comparing three popular tools and LLM APIs: Roo Code with DeepSeek, Roo Code with Gemini 2.0 Flash, and VS Code GitHub Copilot. I tasked each tool with the same objective: to transform my chaotic command notes into a well-structured, easily searchable cheat sheet. Let’s see which one comes out on top!

II. The Problem: My Messy Command Notes (Before LLMs)

Before diving into the LLM solutions, let’s take a look at the problem I was facing.



As you can see, my notes were a jumbled collection of commands, comments, and random observations. There was no consistent formatting, no clear organization, and searching for specific commands was a tedious process. This lack of structure not only wasted time but also increased the risk of errors when executing commands. I needed a better way to manage my knowledge and streamline my workflow.

III. The LLM Approach: A Three-Way Experiment

To tackle this challenge, I decided to put three LLM-powered tools to the test:

  • Roo Code with DeepSeek: Roo Code is an IDE extension that allows you to integrate LLMs directly into your workflow. I used DeepSeek R1 for the architectural planning and DeepSeek V3 for the code generation.
  • Roo Code with Gemini 2.0 Flash: I also used Roo Code with Google’s Gemini 2.0 Flash for both the architectural planning and code generation.
  • VS Code GitHub Copilot (Pro): GitHub Copilot is a popular AI-powered code completion tool that integrates seamlessly with VS Code. I used the built-in o3-mini for planning and Sonnet 3.5 for code generation.

My goal was to see how each tool would handle the task of organizing and refining my raw command notes into a usable cheat sheet. I provided each tool with the same source data and instructions and then compared the results.

IV. Tool #1: Roo Code with DeepSeek

Roo Code provides a powerful environment for working with LLMs. Roo Code (a fork of the open-source Cline project) can substantially boost your productivity, depending on how you use it.

To instruct Roo Code, I provided the following prompt, referencing several files within the Roo Code environment:

@/rules.md Always follow the rules in this file.
@/Python_Boto3_ObjectStore.md This file is the source data you should process. It contains my notes.
@/longtask.md Follow the instructions in this file. Now, design a plan and prepare well, so that you can achieve the goal later.

rules.md:

```
- When interacting with an LLM provider, avoid reading the contents of compressed files into the context window; simply recognize the file name.
```

longtask.md (translated from the original Chinese):

```
Help me organize the commands in this notebook. I want to group commands of the same type together.
How do I decide whether commands are the same type?
- First, group occurrences of the same command together. For example, `find ...` counts as one command; `aws s3api ...` counts as one command, since it is the same sub-command; and `aws s3 ls|cp|etc.` counts as one command.
- Then sort the commands within each block by main-command, sub-command, and so on.
- Place blocks of commands with similar purposes near each other.
- Create two levels. Level 1 describes the purpose and gives it a name, such as "Filesystem-related"; for example, `find` is filesystem-related, since finding and editing files are filesystem operations and `find` is for finding files. Level 2 holds sub-categories within Level 1; for example, among filesystem-related commands, group those that operate on the same kind of object together.
```

Let’s break down what each file contains:

  • @/rules.md: This file contains general rules and guidelines for the LLM to follow, such as formatting conventions, preferred coding style, and error handling strategies.
  • @/Python_Boto3_ObjectStore.md: This file contains my raw command notes for interacting with the Object Store using the Python Boto3 library. It’s the messy data that needs to be organized.
  • @/longtask.md: This file provides detailed instructions on the specific tasks I want the LLM to perform, such as extracting commands, categorizing them, adding descriptions, and generating a well-formatted cheat sheet.
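To make the grouping rules in longtask.md concrete, here is a minimal Python sketch of the two-level categorization they describe. The category names and the command-to-category mapping are my own illustrative assumptions, not the LLM's actual output:

```python
from collections import defaultdict

# Hypothetical mapping from a "main command" prefix to the two-level
# (Level 1 purpose, Level 2 sub-category) hierarchy longtask.md asks for.
CATEGORY_MAP = {
    "find": ("Filesystem", "Find files"),
    "aws s3": ("Object Store", "High-level S3 operations"),
    "aws s3api": ("Object Store", "Low-level S3 API operations"),
}

def main_command(cmd: str) -> str:
    """Return the longest known prefix ('aws s3api' is tried before 'aws s3')."""
    for prefix in sorted(CATEGORY_MAP, key=len, reverse=True):
        if cmd.startswith(prefix):
            return prefix
    return cmd.split()[0]

def build_cheat_sheet(raw_notes: list[str]) -> dict:
    """Group raw command lines into {level1: {level2: [commands]}}."""
    sheet = defaultdict(lambda: defaultdict(list))
    for line in raw_notes:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and free-form comments
        level1, level2 = CATEGORY_MAP.get(main_command(line), ("Misc", "Other"))
        sheet[level1][level2].append(line)
    for subcats in sheet.values():
        for cmds in subcats.values():
            cmds.sort()  # sort commands within each block
    return sheet

notes = [
    "# random observation",
    "aws s3 ls s3://bucket-1",
    "find /data -name '*.log'",
    "aws s3api list-buckets",
]
sheet = build_cheat_sheet(notes)
```

The LLMs, of course, infer the categories themselves rather than relying on a hand-written mapping; the sketch only illustrates the target structure.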

After running the prompt, DeepSeek R1/V3 generated a surprisingly well-structured cheat sheet. The commands were organized into logical categories, and each command was accompanied by a brief description. The output was clean, easy to read, and significantly more useful than my original notes.


V. Tool #2: Roo Code with Gemini 2.0 Flash

Next, I tried the same process using Roo Code with Gemini 2.0 Flash. The prompt was identical to the one I used with DeepSeek.


Gemini 2.0 Flash produced a cheat sheet similar in structure to DeepSeek’s output. However, the structure Gemini generated is more verbose, with AWS-CLI-related commands split into overly fine-grained categories.



VI. Tool #3: VS Code GitHub Copilot (Pro)

Finally, I tested VS Code GitHub Copilot. Copilot offers a feature called “Copilot Edits,” which allows you to ask GitHub Copilot to edit code for you using natural-language prompts.


GitHub Copilot’s Claude 3.5 Sonnet model demonstrated a strong capacity for generalization and effectively “deduplicated” commands.

For example:

```bash
aws s3 cp s3://bucket-1/file1 ...
aws s3 cp s3://bucket-2/file1 ...
```

Copilot collapsed such near-identical commands into a single parameterized entry:

```bash
aws s3 cp [source] [destination] --endpoint-url [url] [options]
```
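The deduplication Copilot performed can be sketched as a normalization step: replace a command's concrete arguments with placeholders, then keep one entry per resulting template. The regexes and placeholder names below are my own assumptions, for illustration only:

```python
import re

def normalize(cmd: str) -> str:
    """Collapse concrete S3 URIs and endpoint URLs into placeholders."""
    cmd = re.sub(r"s3://\S+", "[s3-uri]", cmd)
    cmd = re.sub(r"https?://\S+", "[url]", cmd)
    return cmd

def deduplicate(commands: list[str]) -> list[str]:
    """Keep one representative per normalized command template, in input order."""
    seen, result = set(), []
    for cmd in commands:
        template = normalize(cmd)
        if template not in seen:
            seen.add(template)
            result.append(template)
    return result

cmds = [
    "aws s3 cp s3://bucket-1/file1 /tmp/file1 --endpoint-url https://objstore.example:9000",
    "aws s3 cp s3://bucket-2/file1 /tmp/file1 --endpoint-url https://objstore.example:9000",
]
```

Here `deduplicate(cmds)` collapses both lines into a single templated command, which is essentially what Copilot did without being asked.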

VII. Comparative Analysis: DeepSeek vs. Gemini vs. Copilot

Now, let’s compare the three tools side-by-side:

Here’s a summary of the strengths and weaknesses of each tool:

  • Roo Code with DeepSeek:
    • Strengths: Good overall performance, well-structured output.
    • Weaknesses: The planning and coding stages took the longest. This may be due to the API providers I used: SiliconFlow’s and Hyperbolic’s endpoints rather than the official DeepSeek API.
  • Roo Code with Gemini 2.0 Flash:
    • Strengths: Detailed and accurate descriptions. Very fast planning and coding compared to DeepSeek R1/V3.
    • Weaknesses: The generated structure is somewhat redundant.
  • VS Code GitHub Copilot (Pro):
    • Strengths: Seamless integration with VS Code, accurate organization and editing.
    • Weaknesses: Perhaps it is too eager to add steps I never asked for. For example, I did not explicitly tell it to treat commands that differ only in their object parameters as the same command and to “deduplicate” them, yet Copilot’s Claude 3.5 Sonnet model did so proactively. It also proactively sanitized sensitive information, replacing access keys and similar values in my commands with placeholders.

VIII. Conclusion

In this blog post, I’ve shown how LLMs can be used to automate the creation of command cheat sheets for HPE Ezmeral Data Fabric Object Store management. By leveraging tools like Roo Code, DeepSeek, Gemini, and GitHub Copilot, you can transform your messy command notes into well-organized, easily searchable resources.

Furthermore, you can submit these organized technical notes to RAG-like tools/services so that you can search them in a chat-like manner. A relatively lightweight option is Google NotebookLM.

However, we all know that we should either use a completely local deployment or check for sensitive information before handing it over to a service on the Internet.
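As a minimal example of such a pre-submission check, the sketch below masks AWS-style access key IDs and secret-looking flag values before notes leave your machine. The patterns are illustrative assumptions; for real secret scanning, use a dedicated tool such as gitleaks or trufflehog:

```python
import re

# Illustrative patterns: AWS access key IDs are 20 uppercase alphanumerics
# starting with "AKIA"; secrets here are matched only when they follow an
# explicit "--secret-key"-style flag.
PATTERNS = [
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[ACCESS_KEY_ID]"),
    (re.compile(r"(--secret[-_]?key[= ])\S+"), r"\1[SECRET_KEY]"),
]

def sanitize(text: str) -> str:
    """Replace credential-looking substrings with placeholders."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

note = "aws s3 ls --profile dev  # key AKIAABCDEFGHIJKLMNOP"
```

Running your notes through something like `sanitize` before uploading them to a hosted RAG service is cheap insurance against leaking credentials.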

The benefits of using LLM-powered cheat sheets are clear: increased efficiency, reduced errors, and improved knowledge management. I encourage you to explore these tools and experiment with LLMs for your own system administration tasks. The possibilities are endless!

In the future, I am considering experimenting with MCP (Model Context Protocol) servers, when I have time, to complete similar tasks more automatically.

