3 Comments
Justin Nixon

The 540-respondent survey is what makes this valuable. Most “data trust” conversations are anecdotal. This puts numbers behind what everyone already feels. The finding that lands hardest for me is the gap between “we have data quality tools” and “our stakeholders actually trust the metrics.” Those are two different problems. One is infrastructure. The other is confidence. Most teams solve the first and assume the second follows. It doesn’t. What’s missing is a layer that translates pipeline health into something a CFO can act on in 30 seconds. Not “all checks passed” but “this number is trustworthy, here’s why, and here’s who else is using it.” The report validates that the gap between data quality and decision confidence is real. The question is who builds the bridge.
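
A concrete way to picture that bridge: a minimal sketch of a trust signal that collapses pipeline health into one decision-ready line. Everything here (the TrustSignal class, its fields, the 24-hour freshness threshold) is a hypothetical illustration, not anything taken from the report.

    # Hypothetical sketch: turn raw pipeline checks into a one-glance
    # trust summary instead of "all checks passed" infrastructure noise.
    from dataclasses import dataclass, field

    @dataclass
    class TrustSignal:
        metric: str
        checks_passed: int
        checks_total: int
        freshness_hours: float                         # hours since last successful load
        consumers: list = field(default_factory=list)  # who else relies on this number

        def summary(self) -> str:
            # The 24h freshness cutoff is an arbitrary example policy.
            healthy = (self.checks_passed == self.checks_total
                       and self.freshness_hours <= 24)
            status = "trustworthy" if healthy else "needs review"
            users = ", ".join(self.consumers) or "no one else"
            return (f"{self.metric}: {status} "
                    f"({self.checks_passed}/{self.checks_total} checks, "
                    f"refreshed {self.freshness_hours:.0f}h ago; also used by {users})")

    print(TrustSignal("Q3 net revenue", 12, 12, 3.0,
                      ["FP&A dashboard", "board pack"]).summary())
    # Q3 net revenue: trustworthy (12/12 checks, refreshed 3h ago;
    # also used by FP&A dashboard, board pack)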

Inga Simonenko

This frames AI readiness correctly: machines do not inherit tribal knowledge. If definitions, context, and trust are not encoded in the data layer, automation does not scale. A semantic layer beats shiny tools every time.
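
One way to read "encoded in the data layer" concretely: a metric registry entry that carries the definition, grain, and owner alongside the query, so an AI agent and a new analyst inherit the same context. The field names and the example metric below are hypothetical, just to show the shape.

    # Hypothetical sketch of a semantic-layer entry: the tribal knowledge
    # travels with the metric instead of living in someone's head.
    ACTIVE_CUSTOMERS = {
        "name": "active_customers",
        "definition": "Distinct customers with >=1 paid order in the trailing 30 days",
        "grain": "day",
        "owner": "analytics-engineering",
        "sql": ("SELECT COUNT(DISTINCT customer_id) FROM orders "
                "WHERE status = 'paid' AND order_date >= DATE('now', '-30 day')"),
    }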

Peter Andrew Nolan

Hi Saurabh,

Yes, this is widely known. Getting data out of "large operational systems" like ERPs and into a state an AI can read is very similar to creating a data warehouse. You have to do the ETL.

There is a new invention you might want to know about. It is now possible to create pseudo-dimensional views on top of large operational systems. This reduces the possibility of lossy joins and Cartesian products.

It's not as good as a data warehouse, but it's much cheaper. These sorts of views make it easier to feed AIs for training, and they make it possible to query data from large operational systems with standard query tools.
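
To make that concrete, here is a minimal runnable sketch, assuming three hypothetical operational tables (not the actual Business Central schema): shipments are pre-aggregated to the grain of the sales line before being joined in, so the view cannot fan out the amount column the way a naive join would.

    # Hypothetical sketch of a pseudo-dimensional view over operational
    # tables, using sqlite3 so it runs self-contained.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    -- Normalized operational tables, as an ERP might hold them.
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, region TEXT);
    CREATE TABLE sales_line (id INTEGER PRIMARY KEY,
                             customer_id INTEGER REFERENCES customer(id),
                             posting_date TEXT, amount REAL);
    CREATE TABLE shipment (id INTEGER PRIMARY KEY,
                           sales_line_id INTEGER REFERENCES sales_line(id),
                           qty INTEGER);

    INSERT INTO customer VALUES (1, 'Acme', 'EU'), (2, 'Globex', 'US');
    INSERT INTO sales_line VALUES (1, 1, '2025-01-10', 100.0),
                                  (2, 1, '2025-01-11', 250.0),
                                  (3, 2, '2025-01-11',  75.0);
    -- Sales line 1 has two shipments: a naive join would count its
    -- 100.0 amount twice. That is the lossy join the view guards against.
    INSERT INTO shipment VALUES (10, 1, 5), (11, 1, 3), (12, 2, 7);

    -- Pseudo-dimensional view: shipments pre-aggregated to the grain of
    -- the fact (one row per sales line), so joining them cannot fan out.
    CREATE VIEW v_sales_fact AS
    SELECT sl.id AS sales_line_id,
           sl.posting_date,
           sl.amount,
           c.name   AS customer_name,
           c.region AS customer_region,
           COALESCE(s.total_qty, 0) AS shipped_qty
    FROM sales_line sl
    JOIN customer c ON c.id = sl.customer_id
    LEFT JOIN (SELECT sales_line_id, SUM(qty) AS total_qty
               FROM shipment GROUP BY sales_line_id) s
           ON s.sales_line_id = sl.id;
    """)

    # Totals stay correct because the view's grain is one row per sales line.
    for row in conn.execute("SELECT customer_region, SUM(amount) "
                            "FROM v_sales_fact GROUP BY customer_region"):
        print(row)  # ('EU', 350.0) and ('US', 75.0)

The design choice doing the work is fixing the view's grain up front: anything at a finer grain gets rolled up before it is joined in.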

The post below has a full example with Microsoft's Business Central, which has 200K+ installed accounts.

Enjoy!

https://bida.ro/2025/12/30/bida0060-dimensional-models-over-business-central/