When to Code, When to Use LLMs: Striking the Right Balance for Efficiency
As an AI Engineer, you need to discern when to use code, when to leverage Large Language Models (LLMs), and when to combine both to solve a problem efficiently.
I've noticed a trend where some developers default to using LLMs without considering simpler, more cost-effective coding solutions. For instance, I recently encountered a scenario where developers were scraping articles from Google RSS feeds but only wanted new articles about US companies. They looped over every article, sending each one to OpenAI to determine its relevance, a process that can be both time-consuming and expensive.
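Here is roughly what that per-article loop looks like; the feed URL, model, and prompt below are illustrative stand-ins, not the team's actual code:

```python
import feedparser
from openai import OpenAI

client = OpenAI()

# Hypothetical Google News RSS query, for illustration only
FEED_URL = "https://news.google.com/rss/search?q=US+companies"

feed = feedparser.parse(FEED_URL)

relevant = []
for entry in feed.entries:
    # One API call per article: slow, and the costs add up quickly at scale
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Is this article about a US company? Answer yes or no.\n\n"
                f"Title: {entry.title}\nSummary: {entry.summary}"
            ),
        }],
    )
    answer = response.choices[0].message.content.strip().lower()
    if answer.startswith("yes"):
        relevant.append(entry)
```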
There are several ways to optimize this:
- Filter by Publication Date and Source Domain (first sketch after this list):
  - Exclude articles older than a few days.
  - Maintain a list of irrelevant news sources (e.g., non-US domains) and filter them out immediately.
- Use Keyword Filtering with Caution (second sketch below):
  - While effective, this method risks excluding relevant articles, so it should be used judiciously.
- Check Against Existing Data (third sketch below):
  - Before involving an LLM, compare new articles with your database to avoid duplicates.
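A minimal sketch of the date and domain filter, assuming each feed entry exposes a published_parsed timestamp and a link (the blocked-domain list is made up for illustration):

```python
import time
from urllib.parse import urlparse

# Illustrative list of source domains we never want to send to the LLM
BLOCKED_DOMAINS = {"example.co.uk", "example.de"}
MAX_AGE_DAYS = 3

def passes_basic_filters(entry) -> bool:
    # Drop articles older than a few days
    age_seconds = time.time() - time.mktime(entry.published_parsed)
    if age_seconds > MAX_AGE_DAYS * 24 * 3600:
        return False
    # Drop articles from known-irrelevant source domains
    domain = urlparse(entry.link).netloc.removeprefix("www.")
    return domain not in BLOCKED_DOMAINS
```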
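Keyword filtering can be a plain substring check over the title and summary; the keyword list here is purely hypothetical, and as noted above, an overly aggressive list will discard relevant articles:

```python
# Hypothetical keywords suggesting an article concerns US companies
KEYWORDS = ("nasdaq", "nyse", "wall street", "sec filing")

def matches_keywords(entry) -> bool:
    text = f"{entry.title} {entry.summary}".lower()
    return any(keyword in text for keyword in KEYWORDS)
```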
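And the duplicate check can be a simple lookup against whatever store you already keep; here a set of previously seen URLs stands in for the database:

```python
def is_new(entry, seen_urls: set[str]) -> bool:
    # Anything already stored never reaches the LLM a second time
    return entry.link not in seen_urls
```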
After these steps, you can then use an LLM to assess the remaining articles' relevance. By refining your approach, you might include multiple articles in a single LLM prompt, reducing the number of requests and lowering costs.
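One way to batch the surviving articles into a single request might look like this; the model, prompt wording, and JSON output format are assumptions you would tune and validate in practice:

```python
import json
from openai import OpenAI

client = OpenAI()

def classify_batch(entries) -> list[bool]:
    # Number the articles so the model can answer per item
    numbered = "\n\n".join(
        f"{i}. {e.title} - {e.summary}" for i, e in enumerate(entries, start=1)
    )
    prompt = (
        "For each numbered article below, decide whether it is about a US company. "
        "Reply with only a JSON array of booleans, one per article, "
        "e.g. [true, false, ...].\n\n" + numbered
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # In real code, validate that the reply actually parses as the expected JSON
    return json.loads(response.choices[0].message.content)
```

One request now covers a whole batch of pre-filtered articles instead of one call per article, which is where most of the savings come from.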
The Takeaway:
Before defaulting to LLMs, consider whether traditional coding can address the problem. Often, a combination of smart coding practices and selective use of LLMs yields the most efficient and cost-effective solution.
Work smarter, not harder.