Pulling Our Weight: Contributing to each other’s projects


This week I worked together with my classmate Mayank to contribute to each other’s projects. Mayank’s project, dev-mate-cli, is a command line tool that uses AI to read source code files and generate comments that make the code easier to understand. For reference, my project, codeshift, is also an AI-powered command line tool, except it translates source code files into other programming languages. Our task was to add an option to each other’s tool that displays the number of tokens used by the AI provider to process the source files.

GitHub: mayank-Pareek/dev-mate-cli – A command line tool to quickly document your code

dev-mate-cli

A command-line tool that leverages OpenAI’s Chat Completion API to document code with the assistance of AI models
Watch this Demo video to view features.

Features

  • Source Code Documentation: Automatically generate comments and documentation for your source code.
  • Multiple File Processing: Handle one or multiple files in a single command.
  • Model Selection: Choose which AI model to use with the --model flag.
  • Custom Output: Output the results to a file with the --output flag, or display them in the console.

Installation

  1. Clone the repository:

    git clone https://github.com/mayank-Pareek/dev-mate-cli.git
    cd dev-mate-cli
  2. Install dependencies:

    npm install
  3. Set up environment variables:

    Create a .env file in the project’s root directory with the following configuration. Get API_KEY and BASE_URL from the model provider of your choice:

    API_KEY=your_api_key
    BASE_URL=base_url
  4. Set up LLM configuration:

    Open the config.json file in the project’s root directory and edit the default configuration to customize the LLM. Visit your API Key…

In the open source world, the vehicle for contributing to other people’s work is a Pull Request, where you “pull” your changes in from a branch or fork and merge them into their code. Before creating a pull request, you ideally create an Issue, a report describing the problem you’re trying to fix.

I had some prior experience with creating pull requests, but this week, I became very familiar with the concept.

My Contribution

Before creating an issue for the feature, I began by reading Mayank’s code. Mayank used TypeScript for his project, which I’m not overly accustomed to, but have used a little bit of here and there. His code was also split into several modules, a paradigm I recently learned makes testing easier. I thought a good starting point would be to find the file where the API call to the AI was made, since I’d need the data returned from it. Aside from the feature I had to add, I noticed a few improvements that could be made to the project, and noted them down to create issues later.

To create his CLI tool, Mayank used the same module as I did – commander.js. This made it easy for me to jump in and identify exactly where I needed to add my code. I had to register the -t/--token-usage option, check whether it was passed, and if so, print the token usage property returned by the API. Simple enough. I also made sure to stick as closely as possible to the project’s code style.
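The check itself can be sketched roughly like this. This is a hand-rolled TypeScript sketch, not Mayank’s actual code: the real project registers the flag through commander’s `.option()`, and the `usage` field names here mirror OpenAI-style chat completion responses.

```typescript
// Illustrative sketch of the -t/--token-usage behaviour.
// Field names follow the OpenAI-style `usage` object; they are assumptions,
// not the project's exact types.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

function reportTokenUsage(argv: string[], usage: Usage): string | null {
  const wantsUsage = argv.includes("-t") || argv.includes("--token-usage");
  if (!wantsUsage) return null;
  // Written to stderr so it never mixes with the generated output on stdout.
  const report =
    `Prompt tokens: ${usage.prompt_tokens}\n` +
    `Completion tokens: ${usage.completion_tokens}\n` +
    `Total tokens: ${usage.total_tokens}`;
  console.error(report);
  return report;
}

reportTokenUsage(["--token-usage"], {
  prompt_tokens: 120,
  completion_tokens: 340,
  total_tokens: 460,
});
```

With commander, the same flag would be declared once with `program.option("-t, --token-usage", "...")` and read back via `program.opts().tokenUsage`.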

I created the issue on GitHub and felt satisfied, but after seeing how detailed some of the issues other people created were, I felt I should have gone into more detail. My Pull Request’s description wasn’t particularly insightful either. I suppose I thought the changes were simple enough not to warrant an in-depth explanation.




Add token info feature

#6

Add a new command-line flag: --token-usage or -t. When the program is run with the --token-usage/-t flag set, extra information will be reported to stderr about the number of tokens that were sent in the prompt and returned in the completion.

I’d like to work on this feature. Please let me know if you have any specific implementation guidelines.




Add support for `--token-usage`/`-u` flag

#12

Fixes #6.

Some considerations:

  • I used the -u short flag since -t was in use.
  • This implementation does not print token usage if the -o flag is used.

Mayank looked at my Pull Request, decided it looked good, and merged my code. Problem is, the lab instructed us to ask the contributor to make changes to their PR so we could practice requesting changes through GitHub.

So I made an additional PR. One of the potential improvements I noted earlier was importing the program name and version from the package.json file for printing when the --version flag is used, so that updating them wouldn’t have to be done in multiple files. I submitted the pull request, unsure what I’d be asked to change, and Mayank made the excellent observation that the program description could also be imported from the same file. He asked me to add that change to the PR, which I did.
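The idea is simple: package.json already holds the name, version, and description (all standard npm fields), so the CLI can read them at startup instead of duplicating them. A minimal sketch, with names that are illustrative rather than taken from the project:

```typescript
// Sketch: source CLI metadata from package.json so it lives in one place.
import { readFileSync } from "node:fs";

interface PkgMeta {
  name: string;
  version: string;
  description: string;
}

// Parse the fields we care about out of a raw package.json string.
function pkgMeta(raw: string): PkgMeta {
  const pkg = JSON.parse(raw);
  return { name: pkg.name, version: pkg.version, description: pkg.description };
}

// At startup the CLI would do something like:
// const meta = pkgMeta(readFileSync("package.json", "utf8"));
// program.name(meta.name).description(meta.description).version(meta.version);
```

With commander, `.version()` is what backs the --version flag, so one edit to package.json updates everything at once.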

Outcome

It was much later that I realized the feature I added was bugged. Instead of printing the total amount of tokens used for the entire command at the very end, it printed the tokens used after each input file was processed. This implementation works, but it’s not ideal, and definitely not what I intended. I let Mayank know, and luckily, he seemed okay with the implementation.
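The intended behaviour could look something like the following: collect each file’s usage and report a single total at the end, rather than printing after every file. This is an illustrative sketch under assumed OpenAI-style field names, not the project’s actual code.

```typescript
// Sketch: accumulate per-file token usage and report once at the end.
// Field names mirror OpenAI-style `usage` objects and are assumptions.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

function sumUsage(perFile: Usage[]): Usage {
  return perFile.reduce(
    (acc, u) => ({
      prompt_tokens: acc.prompt_tokens + u.prompt_tokens,
      completion_tokens: acc.completion_tokens + u.completion_tokens,
      total_tokens: acc.total_tokens + u.total_tokens,
    }),
    { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 },
  );
}

// After all files are processed:
// console.error(`Total tokens: ${sumUsage(collected).total_tokens}`);
```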

In hindsight, I wish I’d thoroughly tested my implementation before creating the pull request, but I’m glad it turned out alright.

Mayank’s contribution

Mayank added the feature to my project, and was kind enough to also work on a couple of other issues.




Issue 10 – Add Token Usage Option

#11

Description:

This pull request adds a new --token-usage flag (with a shorthand -t option) to the program, allowing users to see the token usage information when making requests to the API. This flag enables the program to display the number of prompt tokens, completion tokens, and total tokens used by the request.

Changes Made:

  • Added both a long flag --token-usage and a short flag -t to the program for reporting token usage.
  • Added logic to check for the --token-usage flag in the program. If the flag is present and the response contains token usage data, the program will now extract and display it in the console using console.error.
  • The relevant data is extracted from chunk?.x_groq?.usage in the response, following GROQ API’s response structure.
  • Updated the README.md file to document the new --token-usage (-t) option.

Notes:

  • No breaking changes were introduced in this pull request.
  • This PR closes #10 by adding the requested token usage option.

Please let me know if there are any additional changes required.




Replace writeFile with appendFile for file output

#13

Description:

This pull request modifies the program to append data to the output file when multiple input files are provided and the --output flag is used. Previously, the program would overwrite the output file with each input file’s data. Now, it appends the results from each input to the specified output file.

Changes Made:

  • Updated the logic to use fs.appendFile instead of fs.writeFile when multiple input files are passed along with the --output flag. This ensures that data from all input files is written sequentially to the same output file, rather than overwriting previous content.
  • Added logic to output a warning message if the output file is not empty; the file is checked with fs.readFile to determine whether it is empty.

This fixes #12, let me know if further changes are required.




Remove logging output if output file flag passed

#14

If the --output flag is passed, the program will write only to the output file; these changes resolve #8.

As I reviewed his pull requests, I saw the changes were pretty solid for the most part, barring slight code style inconsistencies. I requested the code be changed to match the existing style better, and once that was done, I merged the pull request.

Something interesting is that Mayank added text feedback to the program when outputting to a file. I initially chose not to do this, but after his suggestion, I became open to the idea.

