JSON to Table: From Data Structure to Interactive Tables#
When working with JSON data, staring at raw structures can be overwhelming—especially when nested objects and arrays are mixed together. Converting JSON to a table format makes everything clearer: columns are well-defined, data is aligned, and you can even sort and filter.
Online tool: https://jsokit.com/tools/json-to-table
Column Inference Algorithm#
The core challenge is automatically inferring column headers. JSON isn’t a relational database; there’s no fixed schema, so we must derive fields from the data itself.
The simplest case is an array of objects:
[
{ "name": "Alice", "age": 25, "city": "NYC" },
{ "name": "Bob", "age": 30, "city": "LA" }
]
Just grab all keys from the first object (Object.keys(data[0])) as column headers. But real-world data is rarely this clean:
[
{ "name": "Alice", "age": 25 },
{ "name": "Bob", "age": 30, "email": "bob@example.com" },
{ "name": "Charlie", "city": "Chicago" }
]
Different objects have different fields. We need a union algorithm:
function extractColumns(data) {
const allKeys = new Set();
data.forEach(item => {
if (typeof item === 'object' && item !== null) {
Object.keys(item).forEach(key => allKeys.add(key));
}
});
return Array.from(allKeys);
}
Using Set automatically deduplicates, then we convert to an array. Now [{"a":1}, {"b":2}] produces headers ["a", "b"]—all fields are shown.
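A quick sanity check:
extractColumns([{ a: 1 }, { b: 2 }]);       // ["a", "b"]
extractColumns([{ a: 1 }, { a: 2, b: 3 }]); // ["a", "b"] (duplicates collapsed)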
But there’s a catch: field order is unpredictable. While ES6+ defines Object.keys() order (integer-like keys in ascending order, then string keys in insertion order), business logic often demands specific ordering—like id first, createTime last.
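For example:
Object.keys({ b: 1, 2: 'x', a: 3, 1: 'y' });
// => ["1", "2", "b", "a"] (integer-like keys ascending, then insertion order)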
Production code adds sorting logic:
function extractColumns(data) {
const allKeys = new Set();
data.forEach(item => {
if (typeof item === 'object' && item !== null) {
Object.keys(item).forEach(key => allKeys.add(key));
}
});
const columns = Array.from(allKeys);
// Priority sorting
const priority = ['id', 'name', 'title'];
columns.sort((a, b) => {
const aIdx = priority.indexOf(a);
const bIdx = priority.indexOf(b);
if (aIdx !== -1 && bIdx !== -1) return aIdx - bIdx;
if (aIdx !== -1) return -1;
if (bIdx !== -1) return 1;
return a.localeCompare(b);
});
return columns;
}
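With the priority list in place:
extractColumns([
  { createTime: '2024-01-01', name: 'Alice', id: 1 },
  { id: 2, name: 'Bob', score: 98 }
]);
// => ["id", "name", "createTime", "score"] (priority keys first, the rest alphabetical)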
Flattening Nested Objects#
Real business JSON is rarely flat; it’s often nested multiple levels deep:
{
"user": {
"profile": {
"name": "Alice",
"age": 25
},
"settings": {
"theme": "dark",
"lang": "en-US"
}
}
}
Displaying this directly shows [object Object] in the user column—useless. We need flattening:
function flattenObject(obj, prefix = '', result = {}) {
for (const [key, value] of Object.entries(obj)) {
const newKey = prefix ? `${prefix}.${key}` : key;
if (typeof value === 'object' && value !== null && !Array.isArray(value)) {
flattenObject(value, newKey, result); // Recursively handle nested objects
} else {
result[newKey] = value;
}
}
return result;
}
// Result:
{
"user.profile.name": "Alice",
"user.profile.age": 25,
"user.settings.theme": "dark",
"user.settings.lang": "en-US"
}
Dot-separated paths like user.profile.name clearly express hierarchy.
But flattening has edge cases (a sketch handling all three follows this list):
- Nested arrays: How to handle {"tags": ["js", "ts", "react"]}?
  - Option A: Convert to a string "js, ts, react" (simple, but loses structure)
  - Option B: Expand with indices tags.0, tags.1, tags.2 (preserves structure, but risks column explosion)
  - Option C: Flatten only one level and display the array as the JSON string ["js", "ts", "react"]
- Null values: Should {"field": null} show an empty string or "null"? Usually "" for cleaner tables
- Empty objects: {} flattens to {} and should be filtered out
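Here's a minimal sketch of how those rules could plug into the flattener; the arrayMode option ('join' | 'index' | 'json' for Options A–C) is a made-up knob, not part of any standard API:
function flattenWithOptions(obj, arrayMode = 'join', prefix = '', result = {}) {
  for (const [key, value] of Object.entries(obj)) {
    const newKey = prefix ? `${prefix}.${key}` : key;
    if (Array.isArray(value)) {
      if (arrayMode === 'join') {
        result[newKey] = value.join(', '); // Option A: "js, ts, react"
      } else if (arrayMode === 'index') {
        // Option B: tags.0, tags.1, ... (assumes primitive elements)
        value.forEach((v, i) => { result[`${newKey}.${i}`] = v; });
      } else {
        result[newKey] = JSON.stringify(value); // Option C: JSON string
      }
    } else if (typeof value === 'object' && value !== null) {
      flattenWithOptions(value, arrayMode, newKey, result); // empty objects emit no keys
    } else {
      result[newKey] = value ?? ''; // null -> "" for cleaner tables
    }
  }
  return result;
}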
Single Object to Table#
If the input isn’t an array but a single object:
{
"name": "Alice",
"age": 25,
"city": "NYC"
}
Two strategies:
Strategy 1: Key-value table (good for property display)
| Field | Value |
|---|---|
| name | Alice |
| age | 25 |
| city | NYC |
Strategy 2: Single-row table (unified format with other data)
| name | age | city |
|---|---|---|
| Alice | 25 | NYC |
Strategy 1 fits config display; Strategy 2 fits data preview. Let users switch:
function transformToTable(data, mode = 'key-value') {
  if (Array.isArray(data)) {
    return {
      type: 'multi-row',
      columns: extractColumns(data),
      rows: data
    };
  }
  if (typeof data === 'object' && data !== null) {
    if (mode === 'single-row') {
      // Strategy 2: one-row table, unified with the array case
      return { type: 'multi-row', columns: Object.keys(data), rows: [data] };
    }
    // Strategy 1 (default): key-value mode for single objects
    return {
      type: 'key-value',
      rows: Object.entries(data).map(([key, value]) => ({ field: key, value }))
    };
  }
  return null; // primitives can't be tabulated
}
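Usage:
transformToTable({ name: 'Alice' });               // key-value table
transformToTable({ name: 'Alice' }, 'single-row'); // one-row table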
Performance Optimization for Large Tables#
With large datasets (say, 10,000 rows), direct rendering causes issues:
- DOM explosion: Each cell is a <td>; 10,000 rows × 10 columns = 100,000 nodes
- Memory footprint: Massive numbers of React component instances
- Interaction lag: Sorting/filtering traverses all data
Solution: virtual scrolling:
import { useRef } from 'react';
import { useVirtualizer } from '@tanstack/react-virtual';
function Table({ data, columns }) {
  const parentRef = useRef(null); // scroll container
  const rowVirtualizer = useVirtualizer({
    count: data.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 40, // estimated row height (px)
    overscan: 10 // pre-render 10 rows above/below the viewport
  });
  const virtualRows = rowVirtualizer.getVirtualItems();
  return (
    <div style={{ height: '600px', overflow: 'auto' }} ref={parentRef}>
      {/* Spacer at the full virtual height keeps the scrollbar accurate */}
      <div style={{ height: `${rowVirtualizer.getTotalSize()}px` }}>
        {/* Shift the rendered slice down to where the visible window starts */}
        <table style={{ transform: `translateY(${virtualRows[0]?.start ?? 0}px)` }}>
          <tbody>
            {virtualRows.map(virtualRow => {
              const row = data[virtualRow.index];
              return (
                <tr key={virtualRow.key} style={{ height: `${virtualRow.size}px` }}>
                  {columns.map(col => <td key={col}>{String(row[col] ?? '')}</td>)}
                </tr>
              );
            })}
          </tbody>
        </table>
      </div>
    </div>
  );
}
Only the visible rows plus the overscan buffer are rendered and swapped out as you scroll; with the settings above the DOM holds roughly 30–40 rows no matter how large the dataset.
Export Implementation#
Table-to-CSV export is straightforward, but there’s a pitfall: content escaping. If cell content contains commas or newlines, naive concatenation breaks the format:
function exportToCSV(data, columns) {
  const rows = data.map(item =>
    columns.map(col => {
      let value = item[col];
      // Handle null/undefined
      if (value == null) return '';
      // Objects/arrays to JSON string
      if (typeof value === 'object') {
        value = JSON.stringify(value);
      }
      value = String(value);
      // Escape: wrap in quotes if the value contains a comma, quote, or line break,
      // doubling internal quotes
      if (/[",\r\n]/.test(value)) {
        return `"${value.replace(/"/g, '""')}"`;
      }
      return value;
    }).join(',')
  );
  const csv = [columns.join(','), ...rows].join('\n');
  // UTF-8 BOM so Excel reads the file as UTF-8 (Chinese characters garble without it)
  const blob = new Blob(['\uFEFF' + csv], { type: 'text/csv;charset=utf-8' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = 'table.csv';
  a.click();
  URL.revokeObjectURL(url);
}
\uFEFF is the UTF-8 BOM, telling Excel this is UTF-8 encoded—otherwise Chinese characters become garbled.
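Putting the pieces together (rawData stands in for any parsed JSON array):
const flat = rawData.map(item => flattenObject(item));
exportToCSV(flat, extractColumns(flat));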
Real-World Use Cases#
- API response preview: Backend returns complex JSON; convert it to a table for a quick overview of the data
- Config comparison: Convert package.json dependencies to table, sort/filter to find specific packages
- Data cleaning: Raw JSON to table, instantly see missing fields and anomalies
- Report generation: Business JSON to table, export CSV for data analysis
Summary#
JSON-to-table seems simple, but the details are tricky: column inference must handle inconsistent fields, nested objects need flattening, large data requires virtual scrolling, exports need escaping. Nail these details, and the tool becomes genuinely useful.
Related Tools:
- JSON Formatter - Beautify JSON structure
- JSON to CSV - Export to CSV file
- JSON Tree Editor - Visual JSON editor