Cole Stark, Head of Growth
Jul 29, 2025

Excel is no longer just formulas and pivot tables; it’s an arena for competitive Excel athletes battling to become the next champion. The Financial Modeling World Cup (FMWC) has even turned spreadsheets into an esport, culminating in the Microsoft Excel World Championship broadcast on ESPN.
Our latest video puts this spirit of the Excel World Championship to the test by racing two AIs, Quadratic (an AI‑native, browser‑based spreadsheet) and Microsoft Copilot in Excel, through an official FMWC sample case. See the two AI spreadsheet tools face off here:
Below is a recap of the competition, sprinkled with insights for anyone following the Excel competition scene or exploring the next wave of AI in Excel or other spreadsheets.
The competition format
FMWC cases are mini‑models drawn from the same playbook used in the Excel Financial Modeling World Cup stages. Each case mirrors the pressure of a real tournament: limited time, grading by accuracy, and a complicated spreadsheet task.
For this showdown, we:
- Imported the PDF case (questions on loans, depreciation, and valuation - classic Financial Modeling World Cup cases; see the sketch after this list for the flavor of these calculations).
- Extracted all seven questions with Quadratic, then exported them to an .xlsx file so both tools started with identical data.
- Prompted each AI to “Answer all of the questions.” Quadratic wrote Python directly in‑grid; Copilot generated formulas we had to paste manually.
- Checked answers against the official solution sheet.
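For readers who haven't seen an FMWC case, the questions boil down to applied finance arithmetic under time pressure. The Python sketch below shows the kind of calculation involved - a fixed loan payment and a straight-line depreciation charge - using made-up numbers. It is an illustration only, not the actual case data or the code Quadratic generated.

```python
# Hypothetical example of the kind of calculation an FMWC case asks for;
# the figures below are invented, not the actual case data.

def loan_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed monthly payment for a fully amortizing loan (standard annuity formula)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

def straight_line_depreciation(cost: float, salvage: float, life_years: int) -> float:
    """Annual straight-line depreciation charge."""
    return (cost - salvage) / life_years

print(f"Monthly payment: {loan_payment(250_000, 0.06, 30):,.2f}")                     # ~1,498.88
print(f"Annual depreciation: {straight_line_depreciation(80_000, 5_000, 10):,.2f}")   # 7,500.00
```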
Scoreboard (Quadratic vs. Copilot in Excel)
| Metric | Quadratic | Copilot in Excel |
|---|---|---|
| Time to finish | 21 seconds | 6 minutes, 11 seconds |
| Accuracy | 100% (7/7) | 57.1% (4/7) |
| Number of prompts | 1 | 11 |
Quadratic’s single‑prompt, code‑first approach mirrors how elite players in the Microsoft Excel World Championship blast through models with custom VBA, except here the AI does the heavy lifting.
What we learned about AI in competitive Excel challenges
Context is king
When asked to solve the case, Quadratic correctly pieced together all seven questions and their multiple‑choice options, showing full awareness of the sheet’s structure. Copilot initially recognized only the first three questions, and when prompted again, it generated “template” answers to Questions 4‑7 that had no relevance to the actual questions on the sheet. We then had to paste Questions 4‑7 into the chat manually, which added to Copilot’s final time.
Quadratic’s ability to read all of the available data proved to be a huge advantage in the Financial Modeling World Cup‑style challenge.
What the AI writes
Quadratic inserted working Python code straight into the grid and paired it with a natural‑language summary of the answer in the sidebar. Copilot, by contrast, surfaced Excel formulas in the sidebar and asked us to copy and paste them into the sheet.
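To make that difference concrete, here is a hypothetical side‑by‑side (not taken from the competition case or either AI's actual output): the same straight‑line depreciation answer written as the Excel formula Copilot would hand you to paste, and as a Quadratic‑style Python cell.

```python
# Hypothetical illustration - not the actual case or either AI's generated output.
#
# The Excel route: Copilot surfaces a formula for you to paste into a cell, e.g.
#   =SLN(80000, 5000, 10)        -> 7500
#
# The Quadratic route: the Python lives in the cell itself; assuming the cell
# displays the value of its last expression, this returns the same answer.
cost, salvage, life_years = 80_000, 5_000, 10
(cost - salvage) / life_years    # -> 7500.0
```

Either way the arithmetic is identical; the difference is whether the working lives in the sheet or in a chat sidebar you copy from.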
Both tools’ text responses contained some incorrect information about the questions, which is expected: the chat summary is generated separately from the code or formulas that actually produce the answers. Copilot even carries a disclaimer (“AI‑generated content may be incorrect”), reminding users to trust the calculations rather than any verbal summary. With any AI tool, the authoritative source of truth is the Python or formula output in the grid.
Debugging workflow
Quadratic’s Code Chat lets you click into a Python cell and ask follow‑up questions like “Explain how you solved Question 1,” then generates a text response or annotates the code on the spot. Copilot offered a lighter‑weight experience on some of the questions: hover over a formula to reveal a dropdown explanation.
If a formula fails once you’ve pasted it into the sheet, however, you have to re‑prompt Copilot and paste again, a slower loop when you’re racing the clock in a competitive Excel setting.
Speed matters
In a real Financial Modeling World Cup Excel stage, shaving minutes is the difference between podium and elimination. Quadratic’s 21‑second solve would earn massive time‑bonus points under FMWC rules.
Why this matters beyond the FMWC arena
- Finance teams need reproducible models. Quadratic’s code cells provide audit trails.
- Students can get better at formulas and Python with AI guidance, yet still see the underlying math.
- Casual users get a more seamless spreadsheet experience, where the AI can read the grid and carry out anything a user would do there.
Conclusion
The Excel World Cup‑style face‑off made one reality clear: speed, context awareness, and a friction‑free debugging loop now separate tomorrow’s winners from yesterday’s spreadsheet champions. Quadratic’s AI solved a full Financial Modeling World Cup Excel case in seconds, wrote transparent Python, and left an audit trail fit for any finance team. Copilot in Excel showcased promising natural‑language guidance, but copy‑paste detours and context slips cost precious minutes, time that no competitor can spare in a real Excel competition.
Whether you’re aiming for the next Excel World Championship, sharpening your Python and formula skills, or just curious about the future of AI in spreadsheets, this head‑to‑head proves one thing: the spreadsheet wars are starting to heat up.