
Chinese startup Manus challenges ChatGPT in data visualization: which should enterprises use?



The promise sounds almost too good to be true: Drop a messy comma-separated values (CSV) file into an AI agent, wait two minutes and receive a polished, interactive chart ready for your next board presentation.

But that’s exactly what Chinese startup Manus.im delivered with its latest data visualization feature, launched this month.

Unfortunately, my first hands-on tests with corrupted datasets revealed a fundamental business problem: impressive capabilities combined with insufficient transparency about data transformations. While Manus handled messy data better than ChatGPT, neither tool is ready for boardroom slides yet.

The spreadsheet problem plaguing enterprise analytics

Rossum’s research of 470 finance leaders showed that 58% still relied primarily on Excel for monthly KPIs, despite having BI capabilities. Another TechRadar study estimates that overall spreadsheet dependence affects around 90% of organizations, creating a “last mile data problem” between governed warehouses and hasty CSV exports that land in analysts’ inboxes before critical meetings.

Manus targets this exact gap. Upload your CSV, describe what you want in natural language, and the agent automatically cleans the data, selects the correct Vega-Lite grammar and returns an export-ready PNG chart, not pivot tables.
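To make the workflow concrete, here is a minimal sketch of the kind of Vega-Lite specification such a pipeline might emit for a revenue-trend request. The field names and structure are my assumptions for illustration, not actual Manus output.

```python
import json

def revenue_trend_spec(csv_url: str) -> dict:
    """Build a hypothetical Vega-Lite spec for a monthly revenue trend.
    Field names ("order_date", "revenue") are assumed, not Manus's."""
    return {
        "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
        "data": {"url": csv_url},
        "mark": "line",
        "encoding": {
            # Bin timestamps by calendar month on the x-axis
            "x": {"field": "order_date", "type": "temporal", "timeUnit": "yearmonth"},
            # Sum revenue within each month on the y-axis
            "y": {"field": "revenue", "type": "quantitative", "aggregate": "sum"},
        },
    }

spec = revenue_trend_spec("orders.csv")
print(json.dumps(spec, indent=2))
```

Any Vega-Lite renderer can turn a spec like this into the static PNG that Manus returns; the agent’s job is picking the right marks and encodings from the prompt.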

Where Manus beats ChatGPT: 4x slower but more accurate with messy data

I tested both Manus and ChatGPT’s Advanced Data Analysis with three datasets (113k-row e-commerce orders, a 200k-row marketing dataset, 10k-row SaaS MRR), first clean, then corrupted with 5% error injection including nulls, mixed date formats and duplicates.
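The corruption step can be reproduced along these lines. This is a sketch of the methodology described above, not the exact script I used; the column names and the even three-way split of error types are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Synthetic stand-in for the e-commerce orders file
orders = pd.DataFrame({
    "order_date": pd.date_range("2024-01-01", periods=400, freq="D").astype(str),
    "revenue": np.round(rng.uniform(10, 500, 400), 2),
})

def corrupt(df: pd.DataFrame, frac: float = 0.05) -> pd.DataFrame:
    """Inject ~frac errors of each kind: nulls, mixed date formats, duplicates."""
    out = df.copy()
    n, k = len(out), int(len(out) * frac)
    # 1) Nulls in the numeric column
    out.loc[rng.choice(n, k, replace=False), "revenue"] = np.nan
    # 2) Mixed date formats: rewrite some ISO dates as DD/MM/YYYY strings
    idx = rng.choice(n, k, replace=False)
    out.loc[idx, "order_date"] = pd.to_datetime(
        out.loc[idx, "order_date"]).dt.strftime("%d/%m/%Y")
    # 3) Duplicates: re-append a random sample of rows
    return pd.concat([out, out.sample(k, random_state=1)], ignore_index=True)

messy = corrupt(orders)
```

Feeding `messy` instead of `orders` to each tool is what separates the clean and corrupted rows in the table below.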

For example, testing the same prompt — "Show me a month-by-month revenue trend for the past year and highlight any unusual spikes or dips" — across clean and corrupted 113k-row e-commerce data revealed some stark differences.
| AI | Data quality | Time | Cleans nulls | Parses dates | Handles duplicates | Comments |
| --- | --- | --- | --- | --- | --- | --- |
| Manus | Clean | 1:46 | N/A | N/A | N/A | Correct trend, standard presentation, but incorrect numbers |
| Manus | Messy | 3:53 | Yes | Yes | Yes | Correct trend despite inaccurate data |
| ChatGPT | Clean | 0:57 | N/A | N/A | N/A | Fast but incorrect visualization |
| ChatGPT | Messy | 0:59 | No | No | No | Incorrect trend from unclean data |

For context: DeepSeek could only handle 1% of the file size, while Claude and Grok each took over 5 minutes but produced interactive charts without PNG export options.


Output:

Figure 1-2: Chart outputs from the same prompt on messy e-commerce data. Manus (below) produces a coherent trend despite data corruption, while ChatGPT (above) distorts the pattern due to unclean date formats.

Manus behaves like a cautious junior analyst – automatically cleaning data before charting, successfully parsing date inconsistencies and handling nulls without explicit instructions. When I requested the same revenue trend analysis on corrupted data, Manus took almost 4 minutes but produced a coherent visualization despite the data quality problems.
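The junior-analyst behavior roughly corresponds to a cleanup pass like the following. This is an illustrative sketch of what the tests observed, with assumed column names; it is not Manus’s actual pipeline, and `format="mixed"` requires pandas 2.0 or newer.

```python
import pandas as pd

def clean_for_charting(df: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the cleanup: drop duplicates, parse mixed date
    formats, drop rows whose numeric value is null."""
    out = df.drop_duplicates()
    # format="mixed" (pandas >= 2.0) guesses the format of each string
    out = out.assign(order_date=pd.to_datetime(out["order_date"], format="mixed"))
    return out.dropna(subset=["revenue"])

messy = pd.DataFrame({
    "order_date": ["2024-01-05", "2024-01-05", "15/02/2024", "2024-03-01"],
    "revenue": [100.0, 100.0, None, 250.0],
})
clean = clean_for_charting(messy)
monthly = clean.set_index("order_date")["revenue"].resample("MS").sum()
```

A chart built from `clean` preserves the trend; a chart built from `messy` double-counts the duplicate and chokes on the mixed date formats, which is exactly the failure mode the ChatGPT run showed.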

ChatGPT works like a speed coder – prioritizing rapid output over data hygiene. The same request took only 59 seconds but produced a misleading visualization because it did not automatically clean the inconsistencies.

Both tools, however, failed on “executive readiness.” Neither produced board-ready axis scaling or readable labels without follow-up instructions. Data labels often overlapped or were too small, bar charts lacked proper gridlines and number formatting was inconsistent.

The transparency crisis companies cannot ignore

Here is where Manus becomes problematic for enterprise adoption: The agent never discloses the cleaning steps it applies. An auditor looking at the final chart has no way to confirm which rows were dropped, imputed or transformed.

When a CFO presents quarterly results based on a Manus-generated chart, what happens when someone asks: “How did you handle the duplicate transactions from the Q2 system integration?” The answer is silence.

ChatGPT, Claude and Grok all show their Python code, although transparency via code review does not scale for business users without programming experience. What enterprises need is a simpler audit trail that builds trust.


Warehouse-native AI races ahead

While Manus focuses on CSV uploads, the large platforms are building chart generation directly into enterprise data infrastructure:

Google’s Gemini in BigQuery became generally available in August 2024, enabling SQL query generation and inline visualizations on live tables while respecting row-level security.

Microsoft’s Copilot in Fabric reached general availability in the Power BI experience in May 2024, creating visuals in Fabric notebooks while working directly with Lakehouse datasets.

GoodData’s AI Assistant, launched in June 2025, works within customer environments and respects existing semantic models, letting users ask questions in plain language and receive answers aligned with predefined metrics and business definitions.

These warehouse-native solutions eliminate CSV exports entirely, preserve complete data lineage and use existing security models, advantages that upload-based tools such as Manus struggle to match.

Critical gaps for enterprise adoption

My tests revealed several blockers:

Live data connectivity remains absent – Manus only supports file uploads, with no Snowflake, BigQuery or S3 connectors. Manus.im says connectors are “on the roadmap” but offers no timeline.

Audit trail transparency is completely missing – Enterprise data teams need transformation logs showing exactly how the AI cleaned their data and whether it interpreted fields correctly.
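What such a transformation log might look like is easy to sketch. The schema below is entirely hypothetical, invented to illustrate the missing artifact; Manus exposes nothing like it today, and the counts are placeholder values.

```python
import json
from dataclasses import dataclass, field

@dataclass
class TransformLog:
    """Hypothetical audit trail: one entry per cleaning operation."""
    steps: list = field(default_factory=list)

    def record(self, op: str, column: str, rows_affected: int, detail: str = ""):
        self.steps.append({"op": op, "column": column,
                           "rows_affected": rows_affected, "detail": detail})

    def to_json(self) -> str:
        # Machine-readable output an auditor (or a CFO's deck) could attach
        return json.dumps(self.steps, indent=2)

log = TransformLog()
log.record("drop_duplicates", "*", 37, "exact row matches removed")
log.record("parse_dates", "order_date", 5612, "coerced DD/MM/YYYY to ISO 8601")
log.record("drop_nulls", "revenue", 5433, "rows excluded from aggregation")
```

A log like this, emitted alongside every chart, would answer the “how did you treat the duplicates?” question without requiring anyone to read Python.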

Export flexibility is limited to PNG output – Although adequate for quick slide decks, enterprises need adaptable, interactive export options.

The verdict: impressive technology, premature for enterprise use cases

For SMB managers drowning in ad-hoc CSV analysis, Manus’s drag-and-drop visualization genuinely gets the job done.


The autonomous data cleaning handles real-world messiness that would otherwise require manual preprocessing, cutting turnaround from hours to minutes when you have reasonably complete data.

Moreover, it offers a significant speed advantage over Excel or Google Sheets, which require manual pivots and suffer long loading times due to local compute constraints.

But regulated enterprises with governed data should wait for warehouse-native agents such as Gemini in BigQuery or Fabric Copilot, which keep data inside security perimeters and maintain full lineage.

Bottom line: Manus proves that one-prompt charting works and handles messy data impressively. But for enterprises, the question is not whether the charts look good; it is whether you can bet your career on data transformations you cannot inspect or verify. Until AI agents can connect directly to governed tables with rigorous audit trails, Excel will keep its leading role in quarterly presentations.

