Cross-functional reporting rarely breaks because the company picked the wrong data team structure. It breaks because the reporting workflow has three unresolved gaps: tools, latency, and ownership. Whether analysts service tickets from a centralized team or sit embedded in business units, stakeholders experience late reports, conflicting answers from different systems, and no one who is explicitly accountable for the final number delivered to the stakeholder.
The data team structure does not eliminate these problems. Work may be distributed among data teams by function (data engineering, data science, etc.), by business area (marketing, sales, finance, etc.), by project, or by a hybrid of business area and function. The common thread across these models is that organizations are distributing their data practitioners deeper within the business.
Why this matters
Organizations have been reorganizing their data teams for years, trying to solve reporting problems that continue to exist. The centralized model created a queue. The distributed model of embedded analysts shortens the queue but introduces two new problems: inconsistent definitions of words such as "revenue" across departments, and knowledge fragmentation when teams are disbanded.
The dbt Labs 2025 State of Analytics Engineering survey found increased use of the hybrid model, continuing a trend from the 2024 survey: a centralized data team combined with data analysts also located in the business areas.
The symptom is stakeholders sitting in cross-functional meetings where finance has one number, operations has another, and no one can immediately explain why. This is where analysis-layer tools matter. If the logic, the data, and the stakeholder-facing output live in separate environments, the organization's structure has to compensate for a workflow design problem. Tools like Quadratic are designed for the opposite pattern: keep live data, analytical logic, AI assistance, and stakeholder-facing outputs in the same workspace.
The tools, latency, and ownership issues compound each other, but most organizations do not address them separately:
- A tools gap between the environments where data is produced and where decisions are made
- A latency problem that delivers answers after they were needed
- A last-mile ownership gap where no one is explicitly accountable for providing the actual number the stakeholder needs
A faster data pipeline does not fix the last-mile gap. Embedding an analyst in marketing does not fix the fact that marketing and finance define revenue differently. Naming the three problems separately is the only starting point from which any of them becomes addressable.
The following table summarizes the three failure points, their visible symptoms, and the surface-level fixes that do not address the root cause.
| Failure point | What it looks like | What does not solve it |
|---|---|---|
| Tools gap | Logic lives in one tool, decisions happen in another | Reorganizing the data team |
| Latency problem | Reports arrive after the decision window | Faster pipelines alone |
| Last-mile ownership gap | No one owns the final stakeholder-ready answer | More dashboards alone |
Core analysis of the three causes
The tools gap: two worlds that do not connect
The first place cross-functional reporting breaks is the gap between the environment where data professionals work and the environment where business stakeholders make decisions.
In distributed teams, this gap is easy to miss because the analyst is already inside the business unit. But an embedded analyst who builds a model in Python for the finance team is still producing outputs that the marketing team cannot interrogate or reproduce without starting over. The embedded analyst is closer to only a part of the business when the output is meant for the business as a whole.
In centralized teams, the gap is more visible: data teams live in SQL, Python, and data warehouses while stakeholders live in spreadsheets and slide decks. The distance between those environments is where requests accumulate and answers diverge.
A 2024 survey of 232 data practitioners by the Modern Data Company and the Modern Data 101 community found that 68% of practitioners spend the majority of their time simply trying to understand what the business needs, and 60% of practitioners regularly must rework their tables, dashboards, and data products because they are not meeting business needs. The same survey found that 70% work across more than five tools or three vendors for data quality and dashboards, and 40% spend more than 30% of their time keeping those tools working together.
One respondent described the practical consequence: when stakeholders cannot explain why three systems produce three answers, trust shifts from the data system to whichever answer is easiest to defend.
That trust problem surfaces as a methodology issue in some teams and a governance issue in others, though the root cause is often the same.
In centralized teams, the divergence is usually methodological: the data team runs a query with one set of filters and joins, the stakeholder rebuilds the same question in a different spreadsheet with different filters or joins, and the answers diverge without anyone realizing the underlying logic was never the same.
In distributed teams, the divergence may be more obvious as a lack of cross-functional governance: two embedded analysts in different departments each build a revenue report without realizing the two departments have different definitions of a metric such as revenue.
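The methodology divergence can be made concrete with a small sketch. The data and column names below are invented for illustration; the point is that two computations of "revenue" over the same table silently diverge when one party's filter is never shared.

```python
import pandas as pd

# Hypothetical order data; the schema is illustrative, not from any real system.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4, 5],
    "amount":   [100.0, 250.0, 80.0, 40.0, 300.0],
    "status":   ["paid", "paid", "refunded", "paid", "pending"],
})

# The data team's definition: revenue counts paid orders only.
revenue_data_team = orders.loc[orders["status"] == "paid", "amount"].sum()

# A stakeholder's spreadsheet rebuild: every order counts, refunds and
# pending included, because the filter logic was never visible to them.
revenue_stakeholder = orders["amount"].sum()

print(revenue_data_team)    # 390.0
print(revenue_stakeholder)  # 770.0
```

Neither party made an arithmetic mistake; the numbers diverge because the underlying logic was never the same, and neither side can see the other's filters.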
A 2025 State of BI survey, commissioned by the BI vendor Sigma Computing, found that 71% of respondents said their BI tools are not keeping pace with their data volumes. Because this is vendor-commissioned research, it should not be treated as a neutral market measurement, but it is directionally useful evidence that tool fragmentation and scaling pressure remain live problems. The fragmentation problem is not shrinking as organizations restructure; it is growing with data volumes and analysis demands.
The latency problem: reports that arrive after the decision
The second place reporting breaks is the gap between when the data was collected and when a decision needs to be made. This problem looks different across the centralized and distributed models, but it does not go away in either one.
In centralized teams, latency is primarily a queue problem. The bottleneck is the capacity of the team between the data and the decision, not the speed of the pipeline. In January 2026, Integrate.io reported a survey of 104 data professionals on data team size, reporting lines, and challenges. It found that headcount and alignment remain the biggest barriers to data team impact, with organizational challenges consistently outweighing technical ones.
In distributed teams, queue latency shrinks because the analyst is already in the room. But timing (cadence) misalignment replaces it as the problem. A finance team on a monthly close has different data freshness requirements than a sales team tracking pipelines daily. When a cross-functional review requires both teams to report on the same period, the finance analyst may be working from data exported at month-end while the sales analyst refreshed theirs that morning. Neither number is wrong on its own terms. Together, they produce a shared view of the business that reflects two different moments in time, and the cross-functional meeting spends its first twenty minutes resolving a timing discrepancy.
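The cadence mismatch can also be sketched in a few lines. The deals table and dates below are hypothetical; the point is that the same aggregation over exports taken on different days produces different totals, and both are internally correct.

```python
from datetime import date

import pandas as pd

# Hypothetical closed-deal data; the schema is illustrative only.
deals = pd.DataFrame({
    "deal_id":   [1, 2, 3],
    "amount":    [10_000, 5_000, 7_500],
    "closed_on": [date(2025, 1, 15), date(2025, 1, 28), date(2025, 2, 3)],
})

def total_as_of(snapshot: date) -> int:
    """Sum the deals visible in a data export taken on `snapshot`."""
    return int(deals.loc[deals["closed_on"] <= snapshot, "amount"].sum())

# Finance works from a month-end export; sales refreshed theirs this morning.
finance_view = total_as_of(date(2025, 1, 31))
sales_view = total_as_of(date(2025, 2, 4))

print(finance_view)  # 15000
print(sales_view)    # 22500
```

Both views are faithful to their export dates; the discrepancy only appears when the two teams put their numbers side by side in the same meeting.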
A Fivetran global survey found that 85% of the respondents say stale data has contributed to bad decisions and lost revenue. Gartner estimates poor data quality costs organizations an average of $12.9 million annually. Both figures measure data quality broadly rather than reporting latency specifically, but they establish the cost of the underlying condition.
The last-mile ownership problem: nobody owns the final answer
The third failure point is the hardest to see because it looks like a people problem when it is actually a design problem. In most cross-functional reporting environments, no one is explicitly accountable for the gap between the data that exists in the warehouse and the specific answer a specific stakeholder needs right now.
In centralized teams, this gap sits between the data team and the stakeholder. Data teams own the infrastructure. BI teams own the dashboards. Business analysts own their own models. The final step of applying the specific logic this question requires falls between all three.
The dbt Labs 2025 State of Analytics Engineering surveyed 459 practitioners between October and December 2024. It found that 65% said enabling non-technical users to create transformed and governed data sets would somewhat or greatly improve data value and efficiency. Data professionals themselves recognize that the last mile is not being covered.
In distributed teams, the last-mile gap takes a different form. For example, marketing defines a converted customer as the moment a lead accepts an offer, while finance defines it as the moment cash clears. Even if analysts in different departments start from the same definition of a metric, the definition may change in different ways over time. Without a centralized function governing shared definitions, revenue, churn, or conversion can drift into significantly different metrics across departments.
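The definition drift is easy to demonstrate with invented data. The funnel table below is hypothetical; the point is that two departments computing "conversion rate" from the same leads arrive at different numbers because they count conversion at different events.

```python
import pandas as pd

# Hypothetical lead funnel; column names are invented for illustration.
leads = pd.DataFrame({
    "lead_id":        [1, 2, 3, 4],
    "offer_accepted": [True, True, True, False],
    "cash_cleared":   [True, False, True, False],
})

# Marketing's definition: converted when the lead accepts an offer.
marketing_rate = leads["offer_accepted"].mean()

# Finance's definition: converted only when cash clears.
finance_rate = leads["cash_cleared"].mean()

print(marketing_rate)  # 0.75
print(finance_rate)    # 0.5
```

Each department can defend its number; the conflict only becomes visible when both rates are presented as "the" conversion rate in the same review.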
Further, if the departmental teams disband, the institutional knowledge of how those definitions were constructed leaves with them. The next team inherits metrics whose origins no one can trace.
When last-mile requests take too long or produce conflicting answers in either model, stakeholders stop trusting the system. Then data analysis becomes shadow IT: stakeholders build their own answers in spreadsheets disconnected from the warehouse, refresh exports manually, and create models that no one else can verify. For these stakeholders, working around the system feels faster than working through it.
Practical implications
Do not start by asking whether the data team should be centralized or embedded. Start by asking which failure mode is active; the right intervention looks different depending on the organizational model. Each of four common symptoms points to a specific diagnosis.
- If reports are late, diagnose latency.
- If numbers conflict, diagnose definitions and logic visibility.
- If stakeholders keep rebuilding reports, diagnose last-mile ownership.
- If work disappears when teams change, diagnose persistence and handoff.
For centralized teams, the tools gap narrows when the data team's logic persists in a form that stakeholders can read and reuse without requesting a new version each time. The goal is to make the reasoning behind every answer available without a data team intermediary. Latency drops through better self-service access, not more headcount. Last-mile ownership improves when non-technical stakeholders can directly access live data with built-in governance.
For distributed teams, the tools gap requires shared metric definitions and shared analytical environments across embedded analysts. Then the same question produces the same answer regardless of which department asks it. That requires a governance layer that does not depend on a centralized team to enforce it. The logic itself has to be visible and inspectable, not locked inside individual analysts' environments.
Timing alignment requires explicit agreement on shared refresh schedules — which team's cadence governs a shared report, and what happens when one team's data is more current than another's. Metric definition alignment requires making analytical logic persistent in a form accessible to all members of a cross-functional team, so that when a team disbands the definition does not leave with the analyst who built it. The last-mile latency gap is closed by democratizing access to the data so that non-technical stakeholders can directly access the current answer to a question without waiting for an analyst to be available.
How Quadratic solves these problems
The three problems compound each other, which means a solution that addresses only one of them will not succeed. Quadratic is designed to make all three addressable together.
The foundation is persistence: Quadratic stores the AI's conversations in a chat history and the AI's outputs in spreadsheet cells. For example, when an analyst has the AI write Python or SQL, the code stays in the spreadsheet cell for examination. The analyst can review the code, edit it, or write it from scratch.
In Quadratic, a centralized team's logic becomes readable by the stakeholders who need to use it, without a translation step each time the question recurs. An embedded analyst's methodology becomes visible to any department that needs to verify they are measuring the same thing the same way. The logic that produced the answer is part of the answer. It is not locked away in a BI tool that requires data team access.
For example, an analyst can maintain a single Quadratic file that contains many SQL queries to Mixpanel, Stripe, Google Ads, and other sources. The embedded AI makes the underlying data structures of these platforms easy to understand. It also enforces consistency across all of the Mixpanel queries, so the user can be confident the file is pulling the right data.
In this real-world example, the analyst examined the SQL and verified that it was correct, then had the AI write Python to blend the data sources together and find insights. This is the benefit of cross-functional collaboration in Quadratic: a teammate can come in and use AI to generate Python or Formulas (or write them manually) that blend the raw data in a different way to answer their own ad-hoc question.
There is a crucial benefit to doing this in Quadratic. Their answers will be correct because all the underlying logic on the raw data has been verified by the analyst. If they have questions on how the data is being pulled from these various sources, they can ask the AI to explain the logic behind the SQL.
Visibility into logic is one side of the observability problem. The other side is visibility into the data itself. Data observability means being able to monitor whether data is fresh, complete, and behaving as expected, so problems are caught before they reach a stakeholder rather than after. Analyses can run directly against live connections, and any user can examine the data and how it was calculated, surfacing anomalies at the source rather than in a meeting. A number that looks wrong can be traced immediately to the query that produced it, which is a materially different experience from discovering a problem in a dashboard whose underlying logic is unavailable.
Persistence only matters if the data behind it is current. Quadratic provides four options for updates: (1) update a specific cell or table, (2) update an entire tab, (3) update the entire file manually, or (4) set Quadratic to update automatically in the background on a schedule (daily, weekly, weekdays, or a custom schedule). A Plaid connection can keep financial data live for either business or personal use. Scheduled tasks extend this further: a report that needs to run every Monday runs automatically without anyone initiating it.
For example, you can define a scheduled task that keeps a dashboard updated automatically. You can include sources accessed through APIs, databases, and software connections. All downstream analysis, when built off of the underlying raw data, updates automatically when the raw data does. Then a cross-functional dashboard does not show current pipeline numbers alongside last month's revenue figures. There is no separate export-import cycle, which means there is no gap where data goes stale between builds.
The last-mile problem requires one more thing: lowering the floor of what a non-technical stakeholder can do without routing a request through the data team. This means keeping the logic transparent enough that stakeholders do not require the data team for every request. The stakeholder can use the spreadsheet directly, hand it to an analyst for review, or modify it when the question evolves. Quadratic's AI operates as a visible collaborator with full tool support.
To summarize the problem, persistent code without live data is a well-documented stale answer. Live data without persistent logic is an answer nobody can verify or reuse. Natural language access without both produces outputs that stakeholders cannot trust because they cannot see how they were calculated.
In contrast, Quadratic addresses the problems of cross-functional reporting by giving data practitioners and non-technical stakeholders visibility into the entire workflow. The data and the logic can be traced from the original sources through the analytical logic to the numbers and visuals that display the results.
Conclusion
In summary, cross-functional reporting breaks at three specific points that most organizations conflate into one problem, and it breaks whether the data team is centralized or distributed. Each of the three causes (the tools gap, the latency problem, and the last-mile ownership gap) requires a different intervention. Addressing one while ignoring the other two does not fix the meeting where finance provides one number and operations provides another for what is supposedly the same metric.
Quadratic is built for teams that need to address all three failure modes in the same workflow: live data, persistent logic, shared context, and stakeholder-facing outputs in one browser-based workspace. Analysts and stakeholders use the same tool through the web to access live data and get the answers they need now. If cross-functional reporting is breaking at the handoff between data, logic, and decisions, request a demo of Quadratic.