Executive summary
Claude in Excel (CiE) brings Claude's AI assistance into Excel as an add-in, and it's strongest for day-to-day spreadsheet tasks where speed matters: cleaning messy data, generating and explaining formulas, tracing dependencies across complex workbooks, and producing quick summaries.
For repeatable reporting workflows—dashboards, models, or recurring reports that need to be updated, revisited, and maintained over time—CiE can work, but with an important constraint: its chat reasoning does not persist between sessions, creating auditability and handoff gaps unless you build workarounds.
For automation, governance, or integration-heavy workflows at scale (VBA/Power Query-driven processes, scheduled runs, and auditable pipelines), CiE's architectural limits are often significant enough to make it a poor fit for core parts of the workflow.
The tool can work well for Excel users who work in self-contained tasks, value speed over traceability, and are not building processes that others will maintain or audit. It is less well matched to users whose workflows span multiple sessions, require reproducible logic, or depend on automation that runs without a user present. The three-tier user framework defined in this buyer's guide maps those distinctions precisely. This grounds the decision to use, partially use, or avoid CiE in what the work actually requires.
Why this matters now
The volume of AI announcements in the spreadsheet space has made it genuinely difficult to know where to invest your time and money. In the last few months, major vendors have added AI capabilities to their spreadsheets, and each announcement arrives with similar claims: faster analysis, smarter formulas, and easier workflows. For teams evaluating tools or deciding how deeply to integrate AI into existing processes, the noise is not a minor inconvenience. It is a real decision risk.
The cost of getting it wrong runs in two directions. Teams that dismiss AI entirely miss genuine productivity gains in areas like formula generation, error tracing, and data cleaning. These capabilities are reliable and available now. But teams that adopt features without understanding their limits end up with processes that look automated yet still require the same human verification the AI was supposed to eliminate. A formula created through AI conversation may rest on reasoning that cannot be examined when something goes wrong.
The shift happening underneath all of this is worth naming directly. Spreadsheets are evolving from passive tools that wait for instructions into platforms that can act on a schedule, connect to live data sources, and preserve the reasoning behind every result. Not every AI spreadsheet product has made that transition.
What "Claude for Excel" is
AI is built into spreadsheets using two fundamentally different design approaches. Which approach a tool uses explains why certain limitations are structural rather than bugs to be fixed in the next update.
Add-in vs. native AI (what that changes)
The first approach is to add AI as an external layer on top of an existing spreadsheet application. Claude in Excel (CiE) is the clearest current example. CiE installs through Excel's add-ins menu and accesses your existing Claude account. It operates as a sidebar alongside the spreadsheet, reading the workbook and taking actions within it, but running as a separate process rather than as a component of Excel itself. It supports .xlsx and .xlsm files and preserves formulas, cell relationships, and existing formatting.
The second approach is to build AI in as a native component from the start. In this model, the AI layer, the spreadsheet engine, and the data connections share the same environment. Nothing was integrated after the fact; the pieces were designed together. The practical consequence is that the native AI has access to the full capability surface of the application, which may be much more than the operations the add-in vendor has enabled.
The distinction matters most when a user reaches the edges of what the add-in can do. An add-in's capabilities are defined by what the vendor has built in and what the host application exposes through its API. In contrast, a native AI's capabilities are defined by what the application itself supports.
Installation and account requirements (and where CiE can fail)
CiE is installed through the Microsoft Marketplace via Excel's add-ins menu. It requires an active Claude account at the Pro, Max, Team, or Enterprise tier. For organizational deployments, IT admin deployment through the Microsoft 365 Admin Center is needed. If an organization has disabled "Let users access the Office Store," IT must use the manifest XML deployment method. For individuals, the installation process is straightforward when it works. However, the number of troubleshooting workarounds documented on the web suggests that installation can be problematic, which is worth checking before beginning a team rollout.
The workflow maturity framework (who it's for)
Not every spreadsheet user needs the same things from AI. The features that matter and the limitations that create problems depend almost entirely on the complexity of the work being done. Three usage tiers define the relevant distinctions. They are anchored to workflow complexity rather than technical sophistication.
Tier A: convenience users
Tier A work is ad hoc analysis: quick cleanup, basic pivots and charts, and simple summaries. The success metric is faster answers with minimal setup. For Excel users already working in this range, CiE delivers immediately and reliably. Formula generation from natural language, error tracing through complex dependency chains, and data cleaning across inconsistent formats are all available. The risk is low because the output is verifiable before you accept it. This is AI accelerating work that the user already knows how to do, and the session-persistence limitation barely registers because each task is self-contained.
Tier B: builder users
Tier B work involves repeatable workflows: advanced formulas, structured models, custom dashboards, and refreshable reporting. The success metric is a reliable, reusable analysis that holds up over time. This is where the add-in pattern starts to show its limits. The reasoning behind a formula or model is difficult to recover when it was built through a conversation that CiE does not store between sessions. A user who opens the workbook the next day, or a colleague who inherits it, faces internal logic that cannot be interrogated by continuing the original chat.
CiE does offer a logging function that prints a record of actions to a sheet in the workbook. Whether that log captures the reasoning behind those actions, or only the cell-level changes, is a question to verify before treating it as a full audit trail. Tier B users can work productively with CiE, but they need to deliberately account for this gap.
If this starts to feel like rework—rebuilding context each session or handing models across teammates—it may be worth using an AI-native spreadsheet where conversations and logic persist alongside the workbook (for example, Quadratic).
Tier C: power users
Tier C work involves automation, integration, and governance: VBA, Power Query, complex models at scale, auditability, and integration with other systems. The success metric is control, traceability, and automation that runs without someone present to trigger it. CiE runs into hard architectural limits at this tier. As of early 2026, CiE does not support VBA, macros, or data tables within the AI layer. Anthropic's documentation describes CiE as a beta release, which means its capabilities are actively changing; the most recent support documentation update was in March 2026. For Tier C users whose work requires any of these capabilities inside the AI workflow, CiE is not the answer for that portion of the work.
The tier framework also defines how much traceability matters. A Tier A user who needs a quick chart is well served by a tool that generates it accurately. A Tier C user building a compliance model that will be audited cannot accept the absence of reasoning that supports the recorded actions. Explainability and reproducibility are not uniform requirements across all users. They increase with tier, and the tool needs to match.
What Claude in Excel does well
Across all three user tiers, certain capabilities hold up consistently in real use.
Formula generation and explanation
Formula generation and explanation are the most widely used and best understood capabilities. AI generates formulas from natural language descriptions, explains what existing formulas do, and traces errors back to their source. A user who inherits a workbook with 30 tabs, 200 formulas, no documentation, and a deadline can ask the AI to explain what each formula does in plain English, trace dependencies across sheets, and map how data flows through the workbook. That is a genuine capability that saves real time, and it is available to users at every paid tier.
Data cleaning and normalization
Data cleaning holds up equally well. When a spreadsheet arrives with dates in five different formats, names split inconsistently across columns, and duplicate rows throughout, the cleanup is hours of manual work. AI handles it reliably because the task is pattern recognition applied to repetitive structure. The verification responsibility does not go away, but the time required drops significantly.
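A minimal sketch makes the pattern-recognition nature of this work concrete. The date formats and sample values below are hypothetical stand-ins, but the shape of the task, trying known patterns until one fits and collapsing every variant to a canonical form, is exactly what the AI automates:

```python
from datetime import datetime

# Hypothetical mix of date formats, as they might arrive in one column.
CANDIDATE_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%d %b %Y", "%B %d, %Y"]

def normalize_date(raw: str) -> str:
    """Try each known format until one parses; emit ISO 8601."""
    for fmt in CANDIDATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {raw!r}")

rows = ["2026-01-15", "01/15/2026", "15 Jan 2026", "January 15, 2026"]
cleaned = [normalize_date(r) for r in rows]
# Every variant collapses to the same canonical value: "2026-01-15"
```

Writing and maintaining this kind of normalization logic by hand is the "hours of manual work" described above; delegating it to AI changes the user's job from writing the rules to spot-checking the output.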
Document and PDF extraction (what's useful, what to verify)
PDF extraction is one of the least glamorous and most immediately useful recent developments. Upload a receipt or a financial report, and the AI understands the document structure and imports the structured data. This eliminates the manual copy-paste-clean chore. Multiple platforms support this as a reliable and practically useful capability. The appropriate caution is that extracted data should be spot-checked, particularly for financial documents where the source formatting may be inconsistent.
What breaks (and why)
Session persistence and auditability (chat history and reasoning)
Anthropic's support documentation for CiE states that "chat history is not saved between sessions" and that "each time you open the add-in, you start a fresh conversation with Claude." Every conversation about the workbook, every instruction refined through an hour of work, and every decision about how to handle an edge case is gone when the file closes.
A separate auditability gap affects enterprise users specifically. Anthropic's documentation states that observability and auditability are not currently available for Claude in Excel, that CiE does not inherit custom data retention settings an organization may have configured, and that it is not included in Enterprise audit logs or the Compliance API. For organizations in regulated industries or with formal audit requirements, this is not a workaround problem. It is a hard limit.
A related issue is auto-compaction. In long sessions, Claude automatically summarizes earlier exchanges to make room for new ones, discarding detail in the process. This is acknowledged in Anthropic's documentation and has become a standard warning in the enthusiast community. The important detail is that the compaction is silent. There is no notification when it happens and no record of what was lost. The first sign that compaction has affected a session may be that Claude contradicts a decision made an hour earlier without knowing it is doing so.
The session log that a user can turn on mitigates some of these issues: it captures a record of the actions taken. Whether it captures enough of the reasoning behind those actions needs to be verified before treating it as a full audit trail.
Automation surface area (VBA, Power Query, Power Pivot, and more)
CiE's layered add-in approach means that Anthropic must specifically enable each Excel-native operation. As of March 2026, the support documentation states that Claude can apply a range of Excel-native operations directly, including sorting and filtering data, editing pivot tables and charts, applying conditional formatting rules, setting data validation, and preparing workbooks for printing with finance-specific formatting tools. Each capability on that list had to be built and granted individually. It is not the same as native access to the full Excel object model.
For users whose goal is to build processes that run without them, such as a monthly report that generates on a schedule, CiE is not the answer. The workaround lives outside CiE: the user writes Office Scripts that run against files saved in SharePoint or OneDrive.
One operational consideration is relevant specifically for Tier B and Tier C users: Anthropic documents that MCP connectors configured in your Claude account activate automatically inside Excel. For a user whose Claude account includes connections to financial data providers such as S&P Global, LSEG, Pitchbook, or Moody's, this means data from those sources can move into Excel workbooks through the AI layer. This may be an important constraint for users managing workbooks in environments with data governance requirements.
This is another structural consequence of the add-in approach: capabilities expand only as the vendor explicitly enables more operations, rather than inheriting the full surface area of the spreadsheet environment. AI-native tools can expose broader programmability and scheduling in the same workspace, which is why Quadratic is better suited once the work shifts from 'assist me now' to 'run this reliably every week.'
Reproducibility and traceability (what you can and cannot revisit)
Financial modeling from scratch is one of the most promoted CiE use cases and one of the most genuinely risky. AI can generate a working model structure with functional formulas faster than most analysts can build one manually. The more fundamental issue is what happens when the model needs to change. A model built in a conversation that no longer exists, using assumptions refined through exchanges Claude cannot remember, is difficult to modify correctly. The analyst or colleague who opens the file the next day cannot interrogate the chat reasoning used to build the model. The model persists, and if the log is turned on, a record of actions during the session persists with it. The conversation that created them does not.
Choose Quadratic if any of these are true
Choose Quadratic rather than CiE if any of the following apply to your workflows. They hold across all three user tiers (Convenience, Builder, and Power):
- You need the reasoning to persist across sessions.
- More than one person will maintain the file.
- You want repeatable reporting that refreshes on a schedule.
- You need to combine data sources or use code for analysis.
- You're spending time on exports/CSV workflows and rebuilding the same analysis.
- Your analysis depends on large datasets or complex joins that become brittle or slow in traditional spreadsheets.
- You care about auditability and traceability.
CiE is fine for simple tasks, but Quadratic is the simpler choice over the long term: if you expect to reuse the workflow, share it, or build it into reporting, start with Quadratic so you don't rebuild later.
Quadratic is built on the native-AI model. The AI layer, code cells, and data connections were designed in the same environment from the beginning. There is no add-in to install and no separate account to manage. Complete chat sessions are stored in chat history, which means the reasoning behind any analytical conversation is available to return to and continue.
When native AI matters (persistence, traceability, flexibility)
The practical consequence shows up most clearly for Tier B and Tier C users. A Tier B user whose reporting model needs to be maintained, modified, and understood by more than one person has access to the full conversation history that built it. A Tier C user who needs automation can use Quadratic's scheduled tasks to build processes that run on a defined schedule and deliver outputs without anyone present to trigger them. The reasoning behind every result is available to inspect, whether generated through natural language, Python, SQL, or traditional formulas.
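The shape of such a scheduled job is easy to sketch. Everything below is hypothetical (the data source, column names, and totals are stand-ins, and `fetch_weekly_rows` stubs what would be a live SQL or API connection), but it shows the pattern a Tier C user is after: fetch, aggregate, and emit a report body with no one present to trigger it:

```python
import csv
import io
from collections import defaultdict

def fetch_weekly_rows():
    # Stand-in for a live data connection (SQL, API) in the real job.
    return [
        {"region": "East", "revenue": 1200.0},
        {"region": "West", "revenue": 800.0},
        {"region": "East", "revenue": 300.0},
    ]

def build_report(rows):
    """Aggregate revenue by region and render a CSV report body."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["revenue"]
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["region", "revenue"])
    for region in sorted(totals):
        writer.writerow([region, f"{totals[region]:.2f}"])
    return buf.getvalue()

report = build_report(fetch_weekly_rows())
```

The logic itself is ordinary; what matters architecturally is that the code, its output, and the conversation that produced it live in the same workspace and can be rerun on a schedule.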
On the question of formula fluency: Quadratic preserves everything a user already knows about Excel formulas, including relative A1 notation, absolute locking, and named ranges. A user can see the code behind a result, modify it, and run it again. A user can ask for the Excel formula for a cell and the AI will provide the formula if it exists (e.g., an average), or will explain how to use tools in Excel if it does not (e.g., sentiment analysis). An example is in the Appendix.
What you can do when you outgrow add-ins
The add-in model has a hard ceiling defined by what the host application exposes and what the add-in vendor has enabled. The work outgrows that ceiling when, for example, the absence of persisted session reasoning becomes a workflow problem or auditability becomes a requirement. At that point the answer is not a better add-in; it is a different architecture. Quadratic is the direct upgrade path for Excel users who have hit those limits, because the transition preserves existing spreadsheet skills while removing the architectural constraints that created the ceiling.
Your raw data, your existing spreadsheets, and your spreadsheet formula skills transfer instantly from Excel or Sheets to Quadratic. Request a demo
Appendix
One concern specific to finance users who are power users of Excel is worth naming directly. A common objection to AI-native spreadsheet tools is that finance professionals do not trust Python because they do not know it. Quadratic addresses this in two ways. First, the platform's AI can write both formulas and code (Python and SQL). Users who prefer familiar Excel-style syntax can stay in it, and power users who want to see or write code can do that.
Second, when a result has been calculated using code, you can ask the AI to show you the equivalent Excel formula (if it exists) and explain exactly what the code is doing in plain English. The reasoning behind any result is available, not hidden. The two examples below show the AI's output when asked for the equivalent Excel formula, one where it exists and one where it does not.
This was the AI's response in a financial analysis template when there is an equivalent Excel formula:
"The value in H14 is the percentage change (pct_change) of the closing price. It's calculated as:
=(F14-F13)/F13
This takes the current close price (F14 = 176.3) minus the previous close price (F13 = 179.36), divided by the previous close price, yielding approximately -0.01706 (or about -1.71%)."
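In a Quadratic code cell, the same calculation might look like the following. This is a sketch using the prices quoted above; in practice the values would come from referenced cells or a dataframe column rather than literals:

```python
previous_close = 179.36  # F13 in the Excel version
current_close = 176.30   # F14 in the Excel version

# Same arithmetic as =(F14-F13)/F13 in Excel.
pct_change = (current_close - previous_close) / previous_close
# Rounds to -0.01706, i.e. about -1.71%
```

Because the code is visible in the sheet, the Excel-to-Python correspondence runs in both directions: a user can read the formula out of the code as easily as the AI produced the code from the formula.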
This was the AI's response in the sentiment analysis template when there is NOT an equivalent Excel formula:
"Cell H29 contains the Positive sentiment score (currently 0.89) computed by NLTK's VADER SentimentIntensityAnalyzer.
There is no direct Excel formula equivalent. …
This kind of NLP processing is beyond what Excel formulas can do natively. To replicate it in Excel, you would need:
- Excel + VBA macro calling a Python script or API
- Excel + Power Query connecting to an external sentiment analysis API (e.g., Azure Cognitive Services, Google Cloud NLP)
- Excel Labs' Python in Excel feature (which could run the same NLTK code)
In short, this is a task that requires a programming language or external service — it can't be reduced to a spreadsheet formula."
These examples demonstrate the difference between a tool that uses AI to manipulate numbers and a tool that unlocks the potential of AI to go far beyond traditional number crunching.