Council of LLMs for Testers

Welcome back! Let’s talk about something new today. It’s a fresh idea, maybe even a little futuristic, and I’ve been experimenting with it myself. I call it the “Council of LLMs.” This concept could fundamentally change how we brainstorm at work and in testing. How we strategize. How we generate essential ideas. And how we can improve them.

What is this “Council of LLMs”?

Think of a trusted advisory board. One that’s always available, always alert. It’s filled with diverse minds, each offering unique insights and ready to help you solve your challenges. Now imagine this board isn’t made of people – it’s made of Large Language Models (LLMs), each with different strengths and specialties. That’s your Council of LLMs.
A simple idea, but incredibly powerful.

And while this might sound futuristic, it’s rooted in a very old, proven concept. Kings like Akbar the Great had their councils – groups of wise ministers and scholars who helped them make sense of complicated matters. Akbar had his Navratnas – nine jewels of intellect and art who guided his decisions. Modern governments do something similar with expert panels and advisory councils. These bodies bring multiple perspectives to the table to guide leadership and decision-making.

The Council of LLMs is the same idea – just updated for the digital age. Instead of scholars or ministers, you have specialized AI models. Instead of waiting for meetings, they’re ready when you are. The form has changed, but the function remains: collective wisdom for better thinking.

Are you new to the term LLM? A Large Language Model is a sophisticated AI. It learns from vast quantities of text data – often hundreds of gigabytes or more. Tools like ChatGPT, Gemini, Claude, and Grok are all examples of LLMs. You give them instructions, known as prompts. They process these prompts and generate remarkably human-like text in response, following your lead.

The “Council” idea advances this further. You don’t rely on just one AI’s viewpoint, which is inherently limited. No. Instead, you deliberately assemble a group, a collective of different LLMs. You present them all with the very same task, the same question, the same challenge you face. Then, you harness their combined outputs, gathering a pool of ideas far richer and more diverse than any single model could possibly generate working in isolation. It’s collaborative intelligence, amplified by AI.

Why explore such an approach? Dilemmas of a modern tester

Why introduce this concept now? Why is it relevant? Let’s start with some reality. The life of a tester often carries unique pressures. Many find themselves working solo on projects. They become the lone voice advocating for quality within a team. Time shrinks. The pressure to perform mounts relentlessly. Opportunities for genuine, collaborative brainstorming can be limited.

We understand a fundamental truth: the quality of our initial thinking profoundly impacts testing effectiveness. Our test plan matters. Our strategy dictates our path. The risks we anticipate shape our actions. Increasingly, success hinges less on pure coding ability and more on the sheer depth and breadth of our testing ideas. The Council of LLMs offers a tangible method to augment our cognitive efforts, providing support precisely when we feel isolated, stuck, or overwhelmed by the blank page.

Council of LLMs – Potential benefits

Using one LLM helps. But forming a council? That can offer distinct advantages. These benefits become particularly visible when facing complex or ambiguous tasks.

  • Diverse perspectives arise: Different LLMs possess unique training data. Their architectures vary. This inherent diversity means they approach the same problem from slightly different angles, offering you a wonderfully broad spectrum of suggestions for your test strategy, plan, or idea bank.
  • Idea generation expands: One model might fixate on functional testing. Another could highlight non-functional concerns – performance, perhaps, or security vulnerabilities. Your council, working together, provides a more holistic, comprehensive view of the testing landscape.
  • Potential gaps identified: See where the models agree. Disagreement, however, can be incredibly revealing, possibly highlighting ambiguous requirements or pointing toward overlooked test scenarios demanding deeper investigation within your plan.
  • Blank page syndrome diminishes: We’ve all been there. The council can break that inertia. It provides initial drafts. It suggests structures. It offers starting ideas, getting you moving faster.

Building and using your council of LLMs

Step 1: Define your goal clearly

Clarity is key. What, specifically, do you want the council’s help with? Be precise. For instance, your goal could be:

  • “Outline a high-level test plan for the complete testing of the checkout process added to an e-commerce app (in xyz context).”
  • “Identify potential risks and corresponding test types for migrating user data to a new database.”
  • “Brainstorm key sections and essential considerations for a comprehensive flyer for a testing workshop on xyz topic.”

A sharp goal directs the AI, yielding more focused, useful outputs.

Step 2: Craft a detailed, insightful prompt

This is the most important step. Your prompt is the council’s instruction manual. Specificity breeds quality results. Think carefully when building prompts. Always include details like:

  1. What – What do you want to achieve?
    Clearly describe the goal or the problem statement. This could be your initial idea or what you’re trying to figure out.
    Example: I want to build a test strategy for a mobile banking app.
  2. How – How do you think this should be done?
    Include your approach or the strategy you think should be applied. This could be a specific model, method, or reference material you’d like to use.
    Example: Build the test strategy using the XYZ model, focusing on ABC criteria and PQR elements.
    This helps guide the AI toward structured, relevant solutions – not just generic answers.
  3. Context/Reference – What background information will help?
    Provide supporting details like project stage, product type, known risks, or audience. You can also mention perspectives you want considered or output formats.
    Example: The app is currently in the beta stage. I want the strategy to consider functional, usability, and performance aspects. Output should be a bullet-pointed strategy doc.
  4. Constraints/Goals – What limits or success criteria should the AI know about?
    Mention anything that affects the scope or direction of the task – like team size, time limits, or specific objectives.
    Example: Assume only one tester is available for 4 hours per day. The goal is to prioritize high-risk areas first.

💡 Pro tip: Think of your prompt like a detailed project brief for your LLM council. The more you include, the closer the output will match your real-world expectations.
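The four ingredients above can be assembled mechanically. Here is a minimal sketch in Python of turning them into one reusable brief you can paste into every council member; the function name and field labels are my own, not from any library.

```python
# Minimal sketch: combine the four prompt ingredients (What, How,
# Context/Reference, Constraints/Goals) into a single council brief.
# All names here are illustrative, not a specific tool's API.

def build_council_prompt(what: str, how: str, context: str, constraints: str) -> str:
    """Return one text block that every council member receives verbatim."""
    sections = [
        ("What (goal)", what),
        ("How (approach)", how),
        ("Context / Reference", context),
        ("Constraints / Goals", constraints),
    ]
    return "\n\n".join(f"{title}:\n{body}" for title, body in sections)

prompt = build_council_prompt(
    what="Build a test strategy for a mobile banking app.",
    how="Use the XYZ model, focusing on ABC criteria and PQR elements.",
    context=(
        "The app is in beta. Consider functional, usability, and performance "
        "aspects. Output a bullet-pointed strategy doc."
    ),
    constraints="One tester, 4 hours per day. Prioritize high-risk areas first.",
)
print(prompt)
```

Because every member gets the identical brief, any differences in their answers come from the models, not from accidental wording drift.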

Step 3: Engage with council members

Now, take that carefully crafted prompt. Input it into your chosen LLMs. Use several – perhaps ChatGPT, Gemini, Claude, or others you prefer. Critically, use the same prompt for each one. This consistency ensures you can accurately compare their unique responses later. Also, use thinking mode or search mode if it’s available to you and required for your context. To decide which models to choose, see AI model selection for testers – Rahul’s Testing Titbits.

Step 4: Gather the diverse outputs

Collect all the responses. Each LLM will have generated text based on your prompt. You will likely receive a fascinating variety of suggestions covering areas such as:

  • Potential risks or gaps identified in your initial idea.
  • Recommended ideas or fieldstones to refine your initial idea.
  • Possible tooling suggestions.
  • A step-by-step list of tasks for your plan, or a step-by-step approach to conquer the problem.
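Steps 3 and 4 together are a simple fan-out/collect loop. A minimal Python sketch follows; real model calls depend on each provider’s SDK, so the council members here are hypothetical stand-in functions, not real APIs.

```python
# Sketch of Steps 3-4: send the identical prompt to every council
# member and collect the responses side by side. The lambdas below are
# placeholders simulating different model strengths; in practice each
# would wrap a provider SDK call.

def consult_council(prompt, council):
    """Return {model_name: response_text} for the identical prompt."""
    return {name: ask(prompt) for name, ask in council.items()}

council = {
    "model_a": lambda p: "Focus on functional checkout flows and boundary values.",
    "model_b": lambda p: "Consider performance under load and payment-gateway timeouts.",
    "model_c": lambda p: "Review security: session handling and sensitive data exposure.",
}

responses = consult_council("Outline a test plan for the checkout process.", council)
for name, text in responses.items():
    print(f"{name}: {text}")
```

Keeping the responses in one dictionary, keyed by model, makes the comparison in Step 5 straightforward.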

Step 5: You are the strategist – curate, synthesize, and refine!

This step remains paramount. It demands your expertise.
Remember: the LLMs provide raw material, possibilities, ingredients. You are the architect. You build the final strategy. You craft the definitive plan.

  • Analyze: Thoughtfully compare the different outputs. Where is the alignment? What are the points of divergence? Why?
  • Select: Judiciously choose the most relevant, valuable suggestions. Base this selection on your project’s specific context, its unique priorities, its real-world constraints. Discard generic fluff. Ignore impractical ideas.
  • Synthesize: Skillfully combine the best elements drawn from the various outputs. Weave them into a coherent, logical structure that makes sense for your goals.
  • Refine: Inject your own deep expertise. Fill in any remaining gaps. Adjust priorities based on your understanding. Tailor the language precisely.
  • Own: Create the final document or notes. Produce that definitive test plan. The council assists powerfully, yes, but the final product, its quality and effectiveness, remains yours.

The AI provides the suggestions; you provide the strategy, the wisdom, the final human touch.
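The “Analyze” step above can even get a small mechanical assist: spot where council members align and where they diverge before you apply judgment. This sketch compares naive lowercased word sets; real analysis would use better keyword extraction, and all names here are illustrative.

```python
# Tiny sketch of the "Analyze" sub-step: find terms every council
# member raised (alignment) versus terms only some raised (divergence).
# Word-set comparison is deliberately naive, just enough to triage.

def keyword_sets(responses):
    return {name: set(text.lower().split()) for name, text in responses.items()}

def alignment(responses):
    sets = list(keyword_sets(responses).values())
    common = set.intersection(*sets)   # every member mentions these
    divergent = set.union(*sets) - common  # only some members mention these
    return common, divergent

responses = {
    "model_a": "test checkout with invalid card numbers",
    "model_b": "test checkout under heavy load",
    "model_c": "test checkout for session timeout handling",
}
common, divergent = alignment(responses)
print(sorted(common))  # → ['checkout', 'test']
```

Alignment suggests consensus priorities; the divergent terms are exactly the points worth a closer human look.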

Important considerations to remember

This approach is powerful. But remember these constraints always:

  • Data privacy is paramount: Never, ever input confidential project details into public LLMs. Avoid sensitive data. Steer clear of proprietary information. Stick strictly to non-confidential aspects for brainstorming and planning when using these tools. Protect your data.
  • Augmentation, not replacement, is the goal: These are tools. They assist your thinking; they do not replace it. Your judgment remains crucial. Your experience is invaluable. Your deep understanding of the context is irreplaceable. Remember, in the end, the real precursor of success with AI is you (see The precursor of success with AI: You – Rahul’s Testing Titbits).
  • Critical review is non-negotiable: Always evaluate the AI’s suggestions with a critical eye. Be skeptical. Suggestions might be generic. They could be incorrect. They might prove utterly unsuitable for your specific situation. Verify everything. Adapt constantly. Question always.

Summary

The Council of LLMs. It’s a novel way, a new path, for harnessing AI as a dynamic brainstorming partner. It helps testers generate diverse ideas. It assists in building more robust strategies. It’s particularly useful when time is short and resources are limited. It is fundamentally about augmenting human intelligence, enhancing our capabilities, not replacing the vital human element at the heart of thoughtful, effective software testing.

How will you use a council of LLMs in your work? Think about it. What kind of strategic tasks, those tricky ones, could you pose to them? For inspiration, see AI usage for testers: Quadrants model – Rahul’s Testing Titbits.

Give this approach an honest try. Experiment. You might be genuinely surprised by the creative and strategic results you achieve.

Thank you for reading! 😊
