Intech Solutions

Part 1: Powering AI with High-Quality Data


By Toby Walsh, Chief Scientist at UNSW.ai, the AI Institute of UNSW Sydney (29/11/2025).

It’s a curious thing to realise that this weekend marks the third birthday of ChatGPT.

Three years isn’t a long time in the grand sweep of technological history, yet it feels as though we’ve lived through a decade of change in that short span.

I remember the moment it launched. Not because the technology itself surprised me, but because of how suddenly the world seemed to wake up to artificial intelligence. For many people, that was the day AI stopped being an abstract idea and became something tangible, useful, and undeniably present in their daily lives.

Of course, chatbots are only one small corner of the AI universe. But they were the spark that made the broader public sit up and say, “Ah, this is real now.” And in truth, the reason these systems became so capable, so quickly, comes down to something far less glamorous than algorithms or neural networks. It comes down to data.

I’ve spent my career building AI systems, and if there’s one lesson that has remained constant, it’s this: the success of almost every AI project – be it a chatbot, a medical diagnostic tool, or a predictive model for business – depends overwhelmingly on the quality and quantity of the data used to train it.

Eighty percent of the work in any AI initiative is not the modelling, not the coding, not the deployment. It’s the data. Collecting it. Cleaning it. Understanding it. Ensuring it reflects the world accurately enough that the system built on top of it behaves responsibly.

The journey toward using AI effectively – whether in your business, your organisation, or your personal life – always begins with understanding the data you’re feeding into these systems.

The Technology Isn’t the Surprise – the Speed Is

A week ago, I was speaking with my father, who had just celebrated his 91st birthday. He reminded me that when I went to university, he thought my interest in artificial intelligence was rather eccentric. “It seemed a bit odd back then,” he said. “But now I see you were playing the long game.”

He’s right. The fundamental ideas behind today’s AI systems are not radically different from what researchers imagined decades ago. What has shocked even those of us in the field is not the technology itself, but the speed and scale at which it has arrived.

We are living through what is, by many measures, the largest technological gold rush in human history. More than a billion dollars is being poured into AI every single day. That’s over 20 percent of the world’s entire research and development budget concentrated on one technology. We’ve never seen anything like it.

This tidal wave of investment is why AI seems to be everywhere at once. Every morning, you can open the newspaper and find multiple stories about new breakthroughs, new applications – and new quandaries. The pace is relentless. And that pace is both an opportunity and a challenge.

Australia’s Unease and Why It Matters

Here in Australia, we face a particular tension. Surveys consistently show that Australians are the most concerned of all G20 nations about the impact of AI on their lives. When you dig into those concerns, a clear theme emerges: employment.

People worry about what AI means for their jobs, their livelihoods, their sense of security. And those concerns are not unfounded. AI will reshape the labour market. It will automate some tasks, augment others, and create entirely new categories of work. But the transition will not be painless, and we must be honest about that.

Beyond employment, Australians also express deep concerns about data quality, errors, and bias. And they’re right to. Data is not neutral. It reflects the world in which it was collected, with all its inequalities, blind spots, and historical injustices.

Take something as seemingly straightforward as a heat map of crime statistics. On the surface, it’s just data. But dig deeper and you see the complexities. If more police are sent into poorer neighbourhoods because we expect to find more crime there, then of course more crime will be recorded there. The data becomes a mirror of our assumptions, not an objective truth. And if we feed that data into an AI system – say, to guide police patrols or set insurance premiums – we risk reinforcing the very biases we should be working to dismantle.
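The feedback loop described above can be made concrete with a toy simulation. The numbers below are entirely hypothetical and chosen only for illustration: two neighbourhoods have the same underlying crime rate, but patrols are allocated in proportion to past recorded crime, so an initial skew in the records never washes out.

```python
# Toy model of a data feedback loop: two neighbourhoods with the SAME
# underlying crime rate, but patrols are allocated in proportion to
# *recorded* crime. All numbers are hypothetical, for illustration only.

TRUE_RATE = 0.1        # fraction of patrols that record a crime (same everywhere)
PATROLS_PER_DAY = 100  # total patrols available each day

# Neighbourhood A starts with a small historical skew in its records.
recorded = {"A": 2.0, "B": 1.0}

for day in range(200):
    total = recorded["A"] + recorded["B"]
    for hood in ("A", "B"):
        # Patrols follow past records...
        patrols = PATROLS_PER_DAY * recorded[hood] / total
        # ...and record crime at the same true rate in both neighbourhoods.
        recorded[hood] += patrols * TRUE_RATE

ratio = recorded["A"] / recorded["B"]
print(f"A: {recorded['A']:.0f}, B: {recorded['B']:.0f}, ratio: {ratio:.2f}")
```

Even though the true crime rates are identical, the initial 2:1 skew in the records is preserved indefinitely: the data ends up mirroring the patrol allocation, not reality.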

This is why responsible AI isn’t just a technical challenge. It’s a societal one.

AI Has Arrived, Whether We Like It or Not

One of the striking things about the past few years is how seamlessly AI has woven itself into our digital environments. Whether you use Microsoft tools, Google’s ecosystem, Salesforce, or any number of other platforms, you’ve probably noticed a little AI assistant quietly appearing in the corner of your screen.

You open your email, and it offers to finish your sentence. You open a spreadsheet, and it suggests a formula. You start writing a document, and it proposes a structure. These tools have arrived almost without invitation, and they’re already reshaping how we work.

But these general-purpose assistants are only the beginning. AI is also moving into highly specialised domains, from medicine and logistics to agriculture, finance, and manufacturing. In many cases, it’s already outperforming humans at narrow, well-defined tasks. And in many more cases, it’s becoming a powerful collaborator.

Yet in every one of these applications, the same truth holds: the system is only as good as the data behind it.

The Road Ahead

As we stand three years on from the moment AI truly entered the public consciousness, I find myself both optimistic and cautious. The opportunities are immense and the challenges are real – but the pace of change leaves us little time to adapt.

However, we are not powerless. We can choose to build AI systems that are transparent, fair, and accountable. We can choose to invest in digital literacy so that people understand the tools shaping their world. We can choose to design policies that protect workers, safeguard privacy, and ensure that the benefits of AI are shared widely.

And above all, we can choose to treat data – the foundation of all AI – not as an afterthought, but as the critical infrastructure it is.

Artificial intelligence is many things – a tool, a technology, a catalyst. But at its heart, it is a reflection of humanity – our collective values, our assumptions, our aspirations. If we want AI to serve us well, we must begin by understanding the data that shapes it.

That is the work ahead. And it’s work worth doing.


This article is based on a presentation delivered by Toby Walsh at the Powering AI with High-Quality Data webinar, hosted by Intech Solutions on 29 November 2025. 
