Practical tips for developers

Created: Feb 28, 2024 12:24 PM
Tags: Software Engineering

Intro

Discover the secrets to becoming a more efficient developer with our article's practical tips and strategies. Learn how to leverage tools effectively, manage context, and work iteratively to achieve better results in less time.


Read the Documentation

Understanding the tool you're working with is crucial. Learn about its model, how it forms context, and what features it offers.

If you're using an IDE plugin, familiarize yourself with its specific implementation for your IDE. A tool may behave differently across various IDEs. Keep track of the change logs as these tools frequently update. You don't want to miss out on the latest features.

Refer to our list of 🗜️Tools for additional useful information about each one.

We will provide additional recommendations within the context of GitHub Copilot, which is one of the most popular tools.

Use Prompts

First, determine how each tool forms the prompt. For example, GitHub Copilot explains this process in its official blog post, How to use GitHub Copilot: Prompts, tips, and use cases.

Autocomplete appears to perform better when you start your file with a detailed task description. The same principle applies to chat.
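For instance, here is a minimal sketch of a file that opens with a detailed task description; the task and code are illustrative, not taken from Copilot's documentation, and the function below the comments is the kind of completion the assistant can then produce:

```python
# Task: parse a CSV export of bank transactions and return monthly totals.
# Input: a CSV file with columns "date" (YYYY-MM-DD), "amount" (float)
#        and "category" (string).
# Output: a dict mapping "YYYY-MM" to the sum of amounts for that month.
# Constraints: skip malformed rows instead of raising; standard library only.

import csv
from collections import defaultdict


def monthly_totals(path: str) -> dict[str, float]:
    totals: defaultdict[str, float] = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                month = row["date"][:7]  # "YYYY-MM"
                totals[month] += float(row["amount"])
            except (KeyError, TypeError, ValueError):
                continue  # skip malformed rows
    return dict(totals)
```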

Prompting is crucial for complex tasks. To achieve the best results, describe the task as precisely as possible. For instance, our prompt for Write documentation from existing code was 2200 characters long. It incorporated many practices from Prompt Engineering tips, such as specifying the role, describing the audience, and breaking the task into subtasks.
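A condensed skeleton of a prompt structured that way might look like the following; the wording is illustrative, not our actual 2200-character prompt:

```python
# A condensed, illustrative skeleton of a documentation prompt; the real
# prompt was much longer and project-specific.
DOC_PROMPT = """
Role: You are a senior technical writer documenting a Python codebase.
Audience: backend developers who are new to this project.

Do the following, step by step:
1. Read the module below and summarize its purpose in 2-3 sentences.
2. For each public function, describe its parameters, return value and side effects.
3. Add one short usage example per function.
4. Flag anything that looks like dead code instead of documenting it.

Output format: Markdown with one section per function.

Module:
{source_code}
"""
```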

Manage the Context

Context enhances the understanding of your task for both the assistant and the chat, leading to more relevant results. Depending on your tool and IDE, you can explicitly add certain parts of the codebase to the context with varying degrees of granularity.

Managing context is crucial. Adding too little context makes it difficult for the assistant to understand your task, while adding too much causes it to lose attention to detail.
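As a rough sketch (all names here are hypothetical), "just enough" context for a small task is often a signature plus the types it touches, rather than the whole module:

```python
# Pasted into the chat as context: only the pieces the task depends on,
# not the entire payments module (all names here are hypothetical).
from dataclasses import dataclass


@dataclass
class Invoice:
    id: str
    amount_cents: int
    currency: str
    paid: bool = False


def apply_discount(invoice: Invoice, percent: float) -> Invoice:
    """Return a copy of the invoice with the discount applied."""
    ...  # the request: "implement apply_discount, rounding to whole cents"
```

This snippet plus a one-line request is usually enough; pasting the rest of the module would only dilute the model's attention.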

Add examples

Providing examples can be extremely beneficial for more complex tasks. It gives the model a clear idea of what you're seeking. Treat the AI as you would a human assistant: the more specific your example, the better the result. One or two good examples can significantly cut down the time you spend writing and tuning prompts. This approach is known as few-shot prompting.

This approach has proven effective for various use cases, such as Write unit tests, Write comments for code, and Refactor code.
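Here is a sketch of what a few-shot prompt for the Write unit tests case might look like; the example encodes the conventions we want the model to repeat (pytest style, descriptive names, arrange/act/assert), and all names are illustrative:

```python
# An illustrative few-shot prompt: one example test sets the conventions
# the model should imitate for the next function.
FEW_SHOT_TESTS_PROMPT = """
Write pytest tests for the function I give you, following the style of this example.

Example function:
    def slugify(title: str) -> str: ...

Example test:
    def test_slugify_replaces_spaces_with_dashes():
        # arrange
        title = "Hello World"
        # act
        result = slugify(title)
        # assert
        assert result == "hello-world"

Now write tests in the same style for:
    def monthly_totals(path: str) -> dict[str, float]: ...
"""
```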

Work iteratively

Continuing from the previous point, an iterative approach can be helpful when the desired examples are not available. Initially, you might generate a low-quality example. You can then improve this example and make another request, adding the improved example to the prompt (a single-shot prompt). This will likely yield a better, but not perfect, result. If you repeat this procedure, the model will gradually produce more accurate results.
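One iteration of that loop might look like this sketch, which reuses the hypothetical monthly_totals function from the earlier example: the model's first draft is weak, we improve it by hand, and the improved version becomes the example in the next request.

```python
from reports import monthly_totals  # hypothetical module from the earlier sketch


# Iteration 1: the model's first draft is shallow. It only checks truthiness
# and relies on a file that may not even exist.
def test_monthly_totals():
    assert monthly_totals("data.csv")


# Improved by hand: a temporary file, an exact expectation, more than one month.
def test_monthly_totals_sums_amounts_per_month(tmp_path):
    csv_file = tmp_path / "transactions.csv"
    csv_file.write_text(
        "date,amount,category\n"
        "2024-01-03,10.5,food\n"
        "2024-01-20,4.5,food\n"
        "2024-02-01,7.0,transport\n"
    )
    assert monthly_totals(str(csv_file)) == {"2024-01": 15.0, "2024-02": 7.0}


# Iteration 2: the improved test goes back into the prompt as the example,
# and the next batch of generated tests follows the same pattern.
```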

Divide large tasks into parts

Expecting the model to write a lengthy document or an entire class for you immediately? That's unlikely. The shorter the output, the higher its accuracy.

Start by composing a comprehensive task description, then break it down into steps. If you don't feel like doing this yourself, you can even ask ChatGPT to write the steps for you. Once that's done, you can instruct it to perform a specific step of the overall task.
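As an illustration (the task and its steps are made up), the request can carry the full plan while targeting only one step:

```python
# The overall task, broken into steps. A step list like this can itself be
# generated by ChatGPT from the task description.
TASK = """
Goal: add CSV export to the reporting service.

Steps:
1. Define a ReportRow dataclass mirroring the existing report dict.
2. Write a function that converts a list of ReportRow objects into CSV bytes.
3. Add an HTTP endpoint that streams the CSV.
4. Write unit tests for steps 2 and 3.
"""

# Each request then targets a single step, for example:
STEP_PROMPT = TASK + "\nNow do step 2 only. Use the standard library csv module."
```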

Repeat the request

Sometimes an LLM gives a great answer on the first try; other times you need to rerun it 10 times without changing the prompt.

Build chains of thought

We experimented with the CodiumAI tool, known for specializing in test writing. It outperformed GitHub Copilot due to its iterative approach.

Initially, CodiumAI generates a set of test cases for a function and then writes each test individually. Conversely, GitHub Copilot attempts to write the entire test suite in a single request.

We decided to push our efficiency further.

Like GitHub Copilot, CodiumAI struggles without a test example. To overcome this, we used Copilot to generate the first test, refactored it to meet our requirements, and fed it to CodiumAI as an example. This approach proved to be highly effective.
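For a sense of what that seed looked like, here is a sketch of a first test after refactoring it to our conventions (pytest, parametrization, explicit ids); the function and values are illustrative, not our real code:

```python
import pytest

from myproject.text import slugify  # hypothetical function under test


# The refactored seed test handed to CodiumAI as the example to imitate.
@pytest.mark.parametrize(
    ("title", "expected"),
    [
        ("Hello World", "hello-world"),
        ("  Trim me  ", "trim-me"),
        ("already-slugged", "already-slugged"),
    ],
    ids=["spaces", "surrounding-whitespace", "no-op"],
)
def test_slugify(title, expected):
    assert slugify(title) == expected
```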