James Amoo, Community Partner
Oct 15, 2025

Structured Query Language (SQL) is a tool used by data analysts to query, update, and manage datasets. However, raw data alone is often insufficient to convey insights clearly; visualization is crucial. In fact, a study revealed that 75% of organizations use data visualization tools in analytics workflows. This shows the importance of charting with SQL and turning raw data into visual formats for better understanding.
To promote data accessibility and democratization across an organization, analysts need to go beyond querying and embrace charting with SQL. That means not just working with data, but telling data stories through SQL data visualization.
In this blog post, we’ll explore how to turn SQL queries into clear and trustworthy charts. You’ll discover how to build charts with SQL, visualize key concepts, and use various SQL-based visualization platforms. We’ll also look at how modern tools like Quadratic make this process even easier by allowing you to generate interactive charts from SQL queries using plain language.
What makes a chart trustworthy?
These days, anyone can make a chart with the help of modern SQL data visualization tools. But here's the real question: Can you trust what the chart is showing?
Many AI tools like Claude and ChatGPT make it simple to drag-and-drop a CSV into the chat and generate visualizations. However, they often act as black boxes and hide the logic behind the visuals. If you can't trace or verify how a chart was built, then you’re just hoping it’s right. For data analysts, accuracy must come before aesthetics. More than just a graphic representation, a chart is a claim that should be clear and backed by correct data.
A chart is trustworthy when it accurately reflects the data behind it, tells the truth without distortion, and can be confidently interpreted by its audience without confusion. To create trustworthy charts, data analysts must first understand the underlying process behind visualization. This means ensuring the correct grain in the dataset, using valid joins and filters that don’t distort the data, and using the appropriate chart type.
Understanding grain in a dataset
A grain (also called granularity) refers to the level of detail represented by each row in a dataset. In other words, it specifies what a single row stands for, which is essential to know when creating charts with SQL. Granularity is a foundational concept in data modeling because it determines how data can be queried, aggregated, and visualized.
To build trustworthy charts with SQL, analysts need to know the grain of their data. Misunderstanding it leads to common mistakes such as over-aggregation or mixing data from different grains, like combining customer attributes (user-level data) with website clicks (session-level data) in the same table. This can cause problems like double-counting metrics, resulting in misleading charts and inaccurate insights.
Choosing the right grain is critical for producing trustworthy charts. As a best practice, data should be stored at the finest possible grain. This allows more flexibility, as analysts can easily aggregate it later for specific reports or chart types without losing accuracy.
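To make this concrete, here is a minimal, runnable sketch (using SQLite via Python's sqlite3; the events table and its columns are illustrative, not from a real schema) of keeping data at the finest event grain and rolling it up to a daily grain at query time:

```python
import sqlite3

# Illustrative data stored at the finest (event-level) grain:
# one row per event, with a made-up table and column names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, event_time TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("u1", "2025-01-01 09:00"), ("u2", "2025-01-01 17:30"),
     ("u1", "2025-01-02 08:15")],
)

# Because storage is at the event grain, a coarser daily grain is just an
# aggregation at query time -- no detail was lost up front.
rows = conn.execute(
    "SELECT date(event_time) AS day, COUNT(*) AS events "
    "FROM events GROUP BY day ORDER BY day"
).fetchall()
print(rows)  # [('2025-01-01', 2), ('2025-01-02', 1)]
```

Had the table been stored at the daily grain instead, the per-user detail would be gone and a chart at any finer grain would be impossible.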
Handling time series data
Working with time series data requires careful attention to detail. First, you must ensure all your time columns conform to a consistent format. It's also important to group data into clear reporting intervals using SQL functions like DATE_TRUNC or EXTRACT. This makes it easy to standardize timestamps and compare metrics across time periods when creating charts. Here's an example:

```sql
SELECT
    DATE_TRUNC('month', created_at) AS month,
    COUNT(*) AS new_users
FROM users
GROUP BY 1
ORDER BY 1;
```
This query truncates the timestamp to the month-level for reporting. This instructs the database to round down each timestamp to the first day of its corresponding month, ensuring consistent formatting across all records.
When working with time series data, you should be mindful of incomplete periods (such as the current week or month), which can distort trends and produce misleading charts. For instance, including data from an ongoing month might falsely suggest a drop in performance simply because the month isn’t complete. To prevent this, you can either exclude partial periods from your analysis or explicitly label them to indicate that the data is still in progress.
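One way to exclude the in-progress period is a simple date filter. The sketch below uses SQLite via Python's sqlite3, where `date('now', 'start of month')` plays the role of PostgreSQL's `DATE_TRUNC('month', CURRENT_DATE)`; the signups table is made up for illustration:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signups (user_id TEXT, signup_date TEXT)")

# One row from a long-completed month, one from the current (partial) month.
# SQLite's 'now' is UTC, so the current date is taken in UTC as well.
today_utc = datetime.now(timezone.utc).date().isoformat()
conn.executemany("INSERT INTO signups VALUES (?, ?)",
                 [("old", "2020-01-15"), ("new", today_utc)])

# Keep only rows from fully completed months.
complete = conn.execute(
    "SELECT user_id FROM signups "
    "WHERE signup_date < date('now', 'start of month')"
).fetchall()
print(complete)  # [('old',)] -- the partial-month row is excluded
```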
Another common issue involves gaps in the time series. This occurs when no data is recorded for certain dates or intervals. These gaps can distort trends or make charts appear discontinuous. To handle this, you should fill or interpolate these gaps so charts can display consistent time intervals.
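A common fix is to join the metric onto a generated calendar so missing intervals show up as zeros instead of holes. Here's a runnable sketch using a recursive CTE in SQLite (via Python's sqlite3); the daily_sales table and the date range are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_sales (day TEXT, amount INTEGER)")
conn.executemany("INSERT INTO daily_sales VALUES (?, ?)",
                 [("2025-03-01", 10), ("2025-03-03", 7)])  # 03-02 is missing

rows = conn.execute("""
    WITH RECURSIVE calendar(day) AS (
        SELECT '2025-03-01'
        UNION ALL
        SELECT date(day, '+1 day') FROM calendar WHERE day < '2025-03-03'
    )
    SELECT calendar.day, COALESCE(SUM(amount), 0) AS amount
    FROM calendar
    LEFT JOIN daily_sales ON daily_sales.day = calendar.day
    GROUP BY calendar.day
    ORDER BY calendar.day
""").fetchall()
print(rows)  # [('2025-03-01', 10), ('2025-03-02', 0), ('2025-03-03', 7)]
```

The LEFT JOIN keeps every calendar day, and COALESCE turns the missing day into an explicit zero so a line chart stays continuous.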
Finally, when working with time series data, you should be careful not to create misleading time charts. A common pitfall is plotting cumulative totals when the goal is to show growth or change over time. For example, a chart of total signups will always trend upward, even if new signups are declining during certain periods.
To handle this accurately, analysts should use non-cumulative metrics (such as daily or monthly counts) to convey this data. Alternatively, you can apply moving averages or percentage change calculations to smooth short-term fluctuations and understand growth patterns.
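A moving average, for instance, can be computed directly with a window frame. The sketch below (SQLite via Python's sqlite3, with a made-up daily_signups table) computes a 3-day moving average; the `ROWS BETWEEN 2 PRECEDING AND CURRENT ROW` frame is standard SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_signups (day TEXT, n INTEGER)")
conn.executemany("INSERT INTO daily_signups VALUES (?, ?)",
                 [("2025-05-01", 4), ("2025-05-02", 8),
                  ("2025-05-03", 6), ("2025-05-04", 10)])

# 3-day moving average: the frame covers the current row plus the two
# before it (window functions require SQLite 3.25+).
rows = conn.execute("""
    SELECT day, n,
           AVG(n) OVER (ORDER BY day
                        ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS ma_3d
    FROM daily_signups
    ORDER BY day
""").fetchall()
print(rows)
# [('2025-05-01', 4, 4.0), ('2025-05-02', 8, 6.0),
#  ('2025-05-03', 6, 6.0), ('2025-05-04', 10, 8.0)]
```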
Charting with SQL: Understanding window functions
Window functions in SQL enable calculations across a set of rows that are related to the current row, without collapsing those rows into a single result like aggregate functions do. This means you retain the full detail of each row while still adding the required insights.
These functions are especially valuable when building chart-ready datasets. In data analysis and visualization, it's common to explore trends over time, rank items, calculate running totals, or measure percentages of a whole. Window functions make these tasks easier and more efficient by allowing you to combine raw data with analytical context in a single query, transforming your data into a format suitable for visualization. Let's discuss some of these functions in detail:
ROW_NUMBER()
This function assigns a unique, sequential integer to each row within a partition of your data, letting you label rows according to a specified order. Here's the syntax (the <columns> placeholders stand for the columns you partition and sort by):

```sql
ROW_NUMBER() OVER (PARTITION BY <columns> ORDER BY <columns>)
```

PARTITION BY divides the data into groups (or partitions), so numbering restarts at 1 within each partition. ORDER BY determines the order in which rows are numbered in each partition.
Product managers can use this function for funnel analysis. Suppose you want to create a funnel chart showing how users move through steps. ROW_NUMBER() can be used to get the first occurrence of each step for each user:
```sql
WITH step_rank AS (
    SELECT
        user_id,
        step_name,
        event_time,
        ROW_NUMBER() OVER (PARTITION BY user_id, step_name
                           ORDER BY event_time) AS step_order
    FROM user_events
)
SELECT * FROM step_rank WHERE step_order = 1;
```
This isolates each user's first pass through every step, giving you insight into where users drop off in their sessions.
Let's see another example of how ROW_NUMBER() helps with visualization. Suppose you want to visualize the top 3 products sold each month:
```sql
WITH ranked_sales AS (
    SELECT
        product_id,
        sale_month,
        sales_amount,
        ROW_NUMBER() OVER (PARTITION BY sale_month
                           ORDER BY sales_amount DESC) AS rank
    FROM monthly_sales
)
SELECT * FROM ranked_sales WHERE rank <= 3;
```
This query keeps only the top 3 products in each month, making the result easy to chart.
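If you want to try the top-N-per-group pattern end to end, here's a runnable sketch on toy data (SQLite via Python's sqlite3; the alias is renamed to rn here since RANK is a reserved word in some dialects):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE monthly_sales "
             "(product_id TEXT, sale_month TEXT, sales_amount INTEGER)")
conn.executemany(
    "INSERT INTO monthly_sales VALUES (?, ?, ?)",
    [("a", "2025-01", 100), ("b", "2025-01", 90), ("c", "2025-01", 80),
     ("d", "2025-01", 70),            # 4th place: should be filtered out
     ("a", "2025-02", 50), ("b", "2025-02", 60)],
)

rows = conn.execute("""
    WITH ranked_sales AS (
        SELECT product_id, sale_month, sales_amount,
               ROW_NUMBER() OVER (PARTITION BY sale_month
                                  ORDER BY sales_amount DESC) AS rn
        FROM monthly_sales
    )
    SELECT product_id, sale_month FROM ranked_sales
    WHERE rn <= 3
    ORDER BY sale_month, rn
""").fetchall()
print(rows)
# [('a', '2025-01'), ('b', '2025-01'), ('c', '2025-01'),
#  ('b', '2025-02'), ('a', '2025-02')]
```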
RANK()
The RANK() window function assigns a ranking number to each row within a partition. Unlike ROW_NUMBER(), which assigns a unique value to each row, RANK() assigns the same rank to tied or duplicate values. Here's the syntax:

```sql
RANK() OVER (PARTITION BY <columns> ORDER BY <columns> [ASC|DESC])
```

PARTITION BY breaks your dataset into separate groups, while ORDER BY determines how the rows are ranked. Suppose you have the data below:
| Product | Sales |
|---------|-------|
| Phone   | 100   |
| Laptop  | 80    |
| Monitor | 50    |
| Printer | 80    |
| Scanner | 40    |
Using the RANK() window function:

```sql
SELECT product, sales,
       RANK() OVER (ORDER BY sales DESC) AS rank
FROM sales_table;
```
Here’s the result:
| Product | Sales | Rank |
|---------|-------|------|
| Phone   | 100   | 1    |
| Laptop  | 80    | 2    |
| Printer | 80    | 2    |
| Monitor | 50    | 4    |
| Scanner | 40    | 5    |
This function is useful when you want to account for ties and reflect skipped positions. For example, suppose you want to create a leaderboard that ranks scores:
```sql
SELECT user_id, score,
       RANK() OVER (ORDER BY score DESC) AS rank
FROM game_scores;
```
This creates a leaderboard that reflects a fair ranking of scores.
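If you'd rather not skip positions after a tie, standard SQL also offers DENSE_RANK(). The sketch below (SQLite via Python's sqlite3) runs both functions over the sales data from the table above so you can compare them side by side:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_table (product TEXT, sales INTEGER)")
conn.executemany("INSERT INTO sales_table VALUES (?, ?)",
                 [("Phone", 100), ("Laptop", 80), ("Monitor", 50),
                  ("Printer", 80), ("Scanner", 40)])

rows = conn.execute("""
    SELECT product, sales,
           RANK()       OVER (ORDER BY sales DESC) AS rnk,
           DENSE_RANK() OVER (ORDER BY sales DESC) AS dense_rnk
    FROM sales_table
    ORDER BY sales DESC
""").fetchall()
for row in rows:
    print(row)
# Monitor is ranked 4 by RANK() (a position is skipped after the tie at 80)
# but 3 by DENSE_RANK() (no gap).
```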
SUM() OVER()
The SUM() OVER() window function calculates running totals, moving sums, and percentage-of-total figures without collapsing rows, which makes it ideal for trend analysis. For example, if you want to calculate how many users have signed up over time:

```sql
SELECT signup_date,
       COUNT(user_id) AS daily_signups,
       SUM(COUNT(user_id)) OVER (ORDER BY signup_date) AS cumulative_signups
FROM users
GROUP BY signup_date
ORDER BY signup_date;
```
A line chart can then be used to visualize growth over time.
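Here's a runnable sketch of the same running-total pattern on toy data (SQLite via Python's sqlite3); the daily counts are materialized in a subquery, which mirrors the query above and runs on any engine with window functions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id TEXT, signup_date TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("u1", "2025-01-01"), ("u2", "2025-01-01"),
                  ("u3", "2025-01-02")])

# Daily counts are computed in the subquery, then summed cumulatively.
rows = conn.execute("""
    SELECT signup_date, daily_signups,
           SUM(daily_signups) OVER (ORDER BY signup_date) AS cumulative_signups
    FROM (SELECT signup_date, COUNT(user_id) AS daily_signups
          FROM users GROUP BY signup_date) AS daily
    ORDER BY signup_date
""").fetchall()
print(rows)  # [('2025-01-01', 2, 2), ('2025-01-02', 1, 3)]
```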
SQL to chart workflows
There are several pathways to creating SQL charts. In this section, we'll walk through the most common SQL-to-chart workflows to help you choose the right SQL charting tool for turning your queries into charts.
BI tools
BI tools are platforms that allow users to analyze and visualize data through SQL queries. Data analysts can write SQL queries directly into a built-in query editor, preview the resulting dataset, and transform it into a visual format using a wide range of chart types. Once it is visualized, multiple charts can be organized into interactive SQL dashboards where analysts can customize the layout, apply filters for more exploration, or design a coherent reporting interface. These dashboards can then be saved and shared with stakeholders via secure links or embedded reports, ensuring that everyone in the organization has access.
Data analysis for business intelligence streamlines dashboard creation, high-level monitoring, and stakeholder reporting. Power BI and Tableau are two of the most popular BI tools in the industry, while tools like Metabase and Mode also provide robust solutions for data visualization.
While BI tools are well-suited for creating interactive visualizations, a major limitation is that they struggle with more complex analytics, such as feature engineering, statistical modeling, and experimentation analysis. Additionally, poorly optimized queries can make a SQL dashboard noticeably slow and unresponsive.
Notebooks
This is also a common pathway used by data analysts and product managers to chart with SQL. Unlike BI tools that are more focused on visualization and reporting, notebooks are designed for exploratory data analysis, allowing users to combine code, SQL queries, and visualization in one environment.
Analysts usually begin by querying data directly from a connected database within a notebook cell. These SQL queries are executed using built-in database connectors or extensions. Once the data is prepared, they can build interactive visualizations using libraries such as Matplotlib, Seaborn, or Plotly.
Compared to BI tools, notebooks provide higher flexibility in analysis and visualization, making them suited for other tasks like experiment analysis, A/B testing, and statistical computation. Cloud-based notebooks promote collaboration in teams by supporting real-time editing and version control.
One key drawback of this approach is that notebooks are less accessible to non-technical users, since they require setup and the ability to write code for advanced querying and visualization. Creating charts also tends to take more time, as it often involves manual coding and styling. For analysts who value more flexibility and control over their analytics and visualizations, the notebook-first approach is a good choice.
Spreadsheets
The spreadsheet-first approach is one of the most accessible ways to work with data, as it offers a familiar grid-based interface. This streamlines data exploration and analysis for technical and non-technical users alike, including citizen developers. However, most traditional spreadsheets do not natively support writing or executing SQL for data analysis. Tools like Excel and Google Sheets require connectors or plugins (such as Connected Sheets and Power Query) to pull data from databases.
While this setup enables basic SQL connectivity, it still feels somewhat limited and add-on dependent. This is where Quadratic, an AI-powered spreadsheet, stands out. Unlike traditional spreadsheets, Quadratic offers native SQL support, connection to multiple databases, and built-in AI assistance, allowing users to run queries and generate visualizations. Users who also want to create a graph from SQL will find Quadratic useful.
As an AI tool for data analysis, Quadratic provides a centralized environment where users can access, analyze, and visualize their data seamlessly. This saves users time spent juggling between SQL editors, BI tools, and database query tools. It offers an IDE for data analysis where users can write and execute code and SQL queries.
Thanks to its AI assistance, users do not need to be proficient in writing SQL queries to create charts. All it takes is a simple description of how you want your data to be presented, and Quadratic’s AI spreadsheet analysis technology generates the code to create the SQL chart. First, let’s see how we can pull data from a database using Quadratic’s built-in code editor:
Here, we pulled data from our database using SQL queries. Quadratic automatically displays the result in the spreadsheet when the query is executed. Before we get into creating charts from this data, let’s see how we can generate insights using text prompts:
To understand feature adoption in the product data, I asked Quadratic AI, “Which product features are used the most?” In seconds, it generated a ranked table showing the most frequently used features based on usage counts. Similarly, to create charts in Quadratic, you ask:
Here, we asked Quadratic to visualize the distribution of Operating System usage, and it instantly generated a pie chart to display the results. The charts created in Quadratic are fully customizable, allowing users to adjust styles, colors, and labels for clearer visualizations.
Excited to share insights and visualizations with teammates and other stakeholders? Quadratic provides a collaborative environment where users can simultaneously view, edit, and analyze data in real time. It also provides a smooth and highly responsive experience, thanks to its modern web app architecture.
Conclusion
The ultimate goal of working with data is to uncover insights that drive action, and reaching that goal quickly should be a top priority. After all, even the most thorough analysis loses value if stakeholders can’t easily interpret the results. Charting with SQL helps to translate complex queries into interactive SQL charts and graphs that make insights easier to understand and decisions faster to make.
In this blog post, we explored how to create trustworthy and insightful charts using SQL. We discussed the importance of understanding the grain of your dataset, managing time series data, and applying window functions to build charts. We also explored different workflows for transforming SQL queries into charts. Finally, we saw how Quadratic streamlines this entire process by allowing users to query, analyze, and visualize their data seamlessly through natural language prompts, all within a single environment.