On 05 Feb 2026, Perplexity officially launched “Model Council,” a new multi-model research feature designed to answer a single question by consulting multiple frontier AI systems at the same time. In Perplexity’s own words: “Today we are launching Model Council, a multi-model research feature…”
The idea reflects a growing reality in modern AI: the best model depends on the task. Perplexity frames the core issue bluntly: “Our data show that model performance is increasingly varied across different tasks and questions.” Model Council is Perplexity’s product response: treat variability as a feature to manage rather than a nuisance to ignore.
1) What Perplexity’s Model Council is (and why it exists)
Model Council is Perplexity’s attempt to make AI research less about picking a “winner” model and more about comparing strong perspectives in parallel. Rather than forcing users to decide which model might be best for a given question, the system runs one query across three models at once and then consolidates what comes back.
Perplexity positions the feature around the fact that different models excel at different tasks: summarization, reasoning, creativity, math, coding, or sourcing nuances. The company’s stated motivation is performance variability across tasks, meaning that model A can outperform model B on one question, then lose on the next.
That variability is also a reliability issue. If a single model is overconfident or misses an angle, the user may not realize it. Model Council is designed to reduce those single-model blind spots by exposing agreement and disagreement between top systems as a signal about answer stability.
2) How Model Council works: three models in parallel + a synthesizer
At the core of Model Council is parallelism: Perplexity runs the same prompt “across three of the models… such as Claude Opus 4.6, GPT 5.2, and Gemini 3.0.” The lineup can vary, but the key is that the user receives multiple independent attempts at the same problem.
Those separate attempts are not shown as a messy pile of competing outputs by default. Perplexity says a “synthesizer model” then “reviews the outputs, resolves conflicts… and gives you one answer that shows where the models agree and where they differ.” In other words, the system aims to merge the benefits of ensemble thinking with a single readable result.
The “agree vs. differ” emphasis matters because it changes what users should look for. Convergence can be a confidence signal (multiple strong models independently reaching similar conclusions), while divergence can be a warning sign (the question may be ambiguous, data may be uncertain, or reasoning paths may conflict). Perplexity’s rationale is that multi-model comparison helps reduce overconfident errors that can slip through when only one model is consulted.
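The pattern described above, one prompt fanned out to several models in parallel and a synthesizer pass that surfaces agreement versus disagreement, can be sketched in a few lines. This is a minimal illustration, not Perplexity's implementation: the `ask_*` stubs stand in for real model calls, and the "synthesizer" here is just a toy grouping step rather than an actual model.

```python
# Hypothetical sketch of the Model Council pattern: fan one prompt out
# to several models concurrently, then expose convergence vs. divergence.
# The ask_* functions are illustrative stand-ins, not Perplexity's API.
from concurrent.futures import ThreadPoolExecutor

def ask_model_a(prompt: str) -> str:  # stand-in for one frontier model
    return "Paris"

def ask_model_b(prompt: str) -> str:  # stand-in for a second model
    return "Paris"

def ask_model_c(prompt: str) -> str:  # stand-in for a third model
    return "Paris, France"

def council(prompt: str, models: dict) -> dict:
    # Run every model on the same prompt in parallel.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in models.items()}
        answers = {name: f.result() for name, f in futures.items()}
    # Toy "synthesizer": group identical answers to make agreement visible.
    groups: dict = {}
    for name, answer in answers.items():
        groups.setdefault(answer, []).append(name)
    consensus = max(groups, key=lambda a: len(groups[a]))
    return {
        "answers": answers,
        "consensus": consensus,  # answer shared by the most models
        "dissent": {a: ms for a, ms in groups.items() if a != consensus},
    }

result = council("What is the capital of France?",
                 {"a": ask_model_a, "b": ask_model_b, "c": ask_model_c})
print(result["consensus"])       # the converged answer
print(result["dissent"])         # divergent answers and which models gave them
```

In a real system the synthesizer would itself be a model reconciling free-form text, but even this toy version shows why divergence is a useful signal: the dissenting answer is flagged rather than silently discarded.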
3) Product availability and access: web-only and exclusive to Max
Perplexity’s Help Center defines Model Council as “multi-model research” that “runs queries across three AI models simultaneously and synthesizes one unified answer.” At launch, it is available “exclusively to Perplexity Max subscribers on web,” establishing both a platform limitation and a plan requirement.
Availability is currently limited: the Help Center states it is “currently available on web only… not yet available on mobile or desktop apps.” That constraint is echoed by external coverage as well, including a quote captured by Numerama from Perplexity’s social post: “Run three frontier models at once… Available now on web only for Perplexity Max subscribers.”
Access is intentionally straightforward in the web UI. Perplexity’s Help Center describes the steps as: Home (web) → click the plus (+) next to the search bar → select “Model Council.” That small workflow detail matters because it signals Model Council isn’t a separate product; it’s a mode inside the existing Perplexity search experience.
4) Controls and customization: choosing models and “Thinking” mode
Model Council isn’t locked to a single static trio. According to the Help Center, users “can toggle different models on or off,” letting them pick which three models participate. That supports practical strategies, like mixing a strong reasoning model with a strong creative model and a strong coding model depending on the task.
Per-model controls also include an optional “Thinking” toggle. Perplexity notes: “Each model also has a ‘Thinking’ toggle…” Perplexity does not frame this as a magic switch, but the toggle suggests users can opt into more deliberate, step-heavy processing when needed, typically at the cost of speed or compute.
These controls shift Model Council from being a simple “ensemble button” into something closer to a research cockpit. Instead of treating all questions the same way, users can tune the panel, leaning into careful reasoning for complex decisions, or turning on models optimized for breadth when brainstorming and exploring.
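The controls described above (per-model on/off toggles plus an optional per-model “Thinking” flag) amount to a small configuration surface. A minimal sketch of what such a configuration might look like, with field names that are assumptions for illustration rather than Perplexity's actual schema:

```python
# Illustrative config sketch for Model Council's controls: each slot can
# be toggled on/off and can opt into "Thinking" mode. Names are assumed.
from dataclasses import dataclass, field

@dataclass
class ModelSlot:
    name: str
    enabled: bool = True
    thinking: bool = False  # opt into slower, more deliberate processing

@dataclass
class CouncilConfig:
    slots: list = field(default_factory=list)

    def active_models(self) -> list:
        # Only enabled slots participate in the council run.
        return [s for s in self.slots if s.enabled]

cfg = CouncilConfig(slots=[
    ModelSlot("reasoning-model", thinking=True),  # careful reasoning
    ModelSlot("creative-model"),                  # breadth for brainstorming
    ModelSlot("coding-model", enabled=False),     # toggled off for this task
])
print([s.name for s in cfg.active_models()])
# → ['reasoning-model', 'creative-model']
```

The point of the sketch is the shape of the control surface: the panel is tuned per task by flipping toggles, not by switching products.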
5) Pricing and eligibility: a premium reliability feature
Model Council is bundled with Perplexity Max, priced at $200/month or $2,000/year, per the Help Center. That pricing signals Perplexity is targeting heavy users: people who get material value from marginal improvements in accuracy, completeness, and confidence signaling.
Eligibility is explicitly narrow. The Help Center states it is “available to Max subscribers only” and “not available to Free, Pro, Education Pro, Enterprise Pro, or Enterprise Max…” tiers. In practice, that makes Model Council both a differentiator and a gating mechanism: access to multi-model comparisons becomes a premium capability.
This packaging choice also reflects the compute economics of running three frontier models for one query, plus a synthesizer pass on top. Even if users don’t see the infrastructure, the plan restriction communicates that this level of redundancy and synthesis is expensive, and valuable enough for Perplexity to anchor it to its highest-priced subscription.
6) When Perplexity recommends using Model Council
Perplexity’s Help Center points to several use cases: investment research, complex decisions, creative brainstorming, and verification. These categories share a common trait: the cost of being wrong is higher, or the space of possible answers is wide enough that one model’s first pass may miss key angles.
For investment research, the benefit is less about predicting markets and more about structured diligence, surfacing competing narratives, identifying missing variables, and spotting where models disagree on assumptions. If three strong systems converge on the same caveats, that convergence can be informative; if they diverge, it highlights what needs deeper sourcing.
For verification, Model Council functions like a consistency check. If a single model produces a confident claim, the user can test whether other frontier models corroborate it, challenge it, or reframe it. Perplexity’s stated rationale, “Every AI model has blind spots”, makes verification a first-class motivation rather than an afterthought.
7) Early coverage and positioning: reliability through triangulation
Media coverage quickly summarized the feature in terms of simultaneous multi-model execution. On 06 Feb 2026, Japanese tech outlet Innovatopia described Model Council as running three AI models at the same time to improve answer reliability, an interpretation aligned with Perplexity’s emphasis on convergence and disagreement as a quality signal.
French tech outlet Numerama likewise noted the 05 Feb 2026 announcement date and highlighted the “three models at the same time” concept. Numerama also relayed Perplexity’s social phrasing: “Run three frontier models at once… Available now on web only for Perplexity Max subscribers.” That quote effectively captures both the line promise (three models) and the immediate limitation (web-only, Max-only).
Together, these summaries reflect how Perplexity wants Model Council to be understood: less as a novelty and more as a practical research workflow. The pitch is that reliability doesn’t only come from having a stronger single model, but from triangulating across multiple strong models and making disagreements visible rather than hidden.
Model Council is a notable step in productizing a behavior that many power users already practice manually: asking the same question to different models and comparing results. By formalizing that workflow (three models in parallel, plus a synthesizer that highlights agreement vs. disagreement), Perplexity is trying to make “cross-checking” feel native, fast, and readable.
In the near term, the feature’s impact will depend on how well the synthesizer resolves conflicts without papering over uncertainty, and how often model disagreement genuinely helps users make better decisions. But the direction is clear: as model performance becomes more task-dependent, Perplexity is betting that the future of AI research is not a single oracle, but a council, where reliability comes from comparison, synthesis, and transparent variance.