Technology

4 tips for building better AI agents that your business can trust

By primereports · March 21, 2026 · 7 min read

Image: Ekaterina Demidova/Moment via Getty Images



ZDNET’s key takeaways

  • Companies are exploring AI agents in multiple ways.
  • Professionals must consider how to exploit these technologies.
  • Measurement, collaboration, and experimentation are key.

AI agents will impact every professional role. If your company hasn’t started using agents yet, it will soon, either through off-the-shelf software products or in-house tools that draw on large language models and data sources.

Professionals exploring how to use agents in their roles are well-advised to seek best-practice guidance. One such source of information is Joel Hron, CTO at Thomson Reuters Labs, who is helping the information services company exploit generative AI, machine learning, and agentic technologies.


Hron told ZDNET that Thomson Reuters uses a mix of in-house models and off-the-shelf tools to power its AI innovations. As well as drawing on advances from Big Tech frontier labs, Hron and his team make sure the firm exploits its own proprietary knowledge and assets.

“If you look at the core of what we do well, it’s being able to synthesize human expertise and information into judgment that can be served back to professionals,” he said. 

“The delivery mechanism for how that expertise is delivered is evolving right now. Traditionally, it’s been delivered via software. But it’s increasingly delivered via agents, or agents plus software.”

Hron points to several key agentic achievements at Thomson Reuters, including the AI-powered legal research tool Westlaw Advantage and the firm’s Deep Research agent that reviews insights and strategizes as a researcher would.


From these explorations, Hron said he’s learned four key lessons that professionals can use to build trustworthy agentic AI systems.

1. Measure your success

Hron said the first area to focus on is evaluations: “You need to know what good looks like.”

While this focus on evaluations sounds like an obvious requirement, Hron said it’s a hard process to get right, to quantify, and to systematize.

“We’ve said for the last three years that this is one of the most important things for building good AI systems, and it continues to be true today in an era of agents,” he said.


Hron: “We still want the confidence of our human experts.”

Thomson Reuters

Hron’s team tracks and measures agentic success in several ways. First, they leverage public benchmarks, which he said provide good early indicators of the potential performance of new models.


Second, they’ve developed their own internal benchmarks with strong directions for automated evaluations: “Rather than just saying, ‘How close is the generated answer to a good answer?’, our process is about really defining, ‘Well, what makes the answer good?'”

Finally, Thomson Reuters keeps humans in the loop, ensuring evaluations go a step beyond automated assessments.

“Automated evaluations help drive the flywheel faster for our development teams, and they can test a lot of ideas relatively quickly, and that’s good. But before we ship, we still want the confidence of our human experts and their assessment of the performance,” he said.

“The continued reliance on that approach has allowed us to ship great products that perform well in the market. I think human input is a critical ingredient to us being able to do that work well and do it with confidence.”
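Hron's distinction between "how close is the answer to a good answer?" and "what makes the answer good?" maps naturally onto rubric-based automated evaluation. The sketch below is a minimal illustration of that idea in Python; the criteria, thresholds, and sample answer are invented for the example and are not Thomson Reuters' actual system:

```python
# Rubric-based evaluation: score explicit criteria that define "what
# makes the answer good", rather than similarity to a reference answer.

def cites_authority(answer: str) -> bool:
    """Criterion: the answer names at least one authority it relies on."""
    return "v." in answer or "Act" in answer

def states_jurisdiction(answer: str) -> bool:
    """Criterion: the answer says which jurisdiction it applies to."""
    return any(j in answer for j in ("federal", "state", "UK", "US"))

def commits_to_conclusion(answer: str) -> bool:
    """Criterion: the answer commits to a conclusion, not just caveats."""
    return "it depends" not in answer.lower()

RUBRIC = {
    "cites_authority": cites_authority,
    "states_jurisdiction": states_jurisdiction,
    "commits_to_conclusion": commits_to_conclusion,
}

def evaluate(answer: str, human_review_threshold: float = 1.0) -> dict:
    """Score an answer against the rubric; flag it for human review
    unless every criterion passes, keeping humans in the loop."""
    results = {name: check(answer) for name, check in RUBRIC.items()}
    score = sum(results.values()) / len(results)
    return {
        "criteria": results,
        "score": score,
        "needs_human_review": score < human_review_threshold,
    }

answer = "Under US federal law, Smith v. Jones controls; the claim fails."
report = evaluate(answer)
```

Automated checks like these drive the fast development flywheel Hron mentions; the `needs_human_review` flag is where expert sign-off re-enters before shipping.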

2. Make experts sit together

Hron advised professionals to understand deeply what agents do and how they operate over time.

“Tightly coupling that awareness to the user experience is increasingly important,” he said. “If you think about these agentic systems like human AI collaborators, then the human and the agent need a common language and a common interface that they work on.”


Hron said this common language and interface should give humans valuable insight into agentic thought processes and vice versa.

“This area is a new and important UI experience, and I think tightly coupling deep technical understanding of the agent with a good user experience is critical.”
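One way to give humans that insight into agentic thought processes is to have the agent emit a structured trace of each step, which the interface can render alongside the result. The sketch below is a hypothetical illustration; the event fields and class names are assumptions for the example, not a real framework's API:

```python
from dataclasses import asdict, dataclass, field

@dataclass
class AgentStep:
    """One unit of agent 'thought' exposed to the user interface."""
    action: str     # what the agent did (e.g. "search", "filter")
    rationale: str  # why the agent chose this step
    outcome: str    # what the step produced

@dataclass
class AgentTrace:
    """A shared record — a common language both the human and the
    agent can work against."""
    goal: str
    steps: list = field(default_factory=list)

    def record(self, action: str, rationale: str, outcome: str) -> None:
        self.steps.append(AgentStep(action, rationale, outcome))

    def render(self) -> list:
        """Serialize for a UI panel that shows the agent's reasoning."""
        return [asdict(s) for s in self.steps]

trace = AgentTrace(goal="Find precedents on data-retention limits")
trace.record("search", "Goal mentions precedents, so query case law", "12 hits")
trace.record("filter", "Keep only appellate decisions", "3 hits remain")
```

A design built around a trace like this gives the interface designers and the data scientists a single artifact to argue over, which is exactly the collaboration Hron describes.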

While many experts talk about the importance of human/agent coupling, Hron said the key to success is straightforward: bringing teams in the business together.

“This process isn’t scientific — it’s about forcing my designers to sit with data scientists and talk about what’s happening,” he said. “The closer we can make those two sets of people, and the more often they can sit together, the better you have the osmosis of thinking across those two areas.”

3. Develop proven capabilities

Despite any hype that might have you believe otherwise, Hron said professionals must recognize that agents and the models that power them are far from omniscient.

Hron said AI models are improving across three dimensions: writing code, executing plans, and multi-step reasoning. The latest advances allow model capabilities to be extended by other software tools.

“What that development means for us as a company is more positive than negative, because it means that, if we can take all of these hundreds of applications that we’ve sold into the market for many decades, and we can decompose them, then we have proven capabilities for professionals,” he said.


“If we can decompose these elements as tools for the agent, then we’re actually extending the capabilities of these models quite a lot, and that’s really the future of agents.”

Rather than seeing agentic AI as an omniscient model that attempts to do everything under the sun, Hron advised professionals to give agents access to proven capabilities people already use, which is a focus of his team.

“We’re looking at our systems and asking ourselves, ‘OK, we’ve built this for a human user for many, many years. Now, what ergonomics are required for an agent to work with this system? How do you adapt the process to be conducive to working with an agent, versus necessarily a human in all cases? And what does that approach mean for how the tool looks, feels, and performs?'”
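The "ergonomics" Hron describes often come down to wrapping an existing, human-facing capability behind a declarative description an agent can discover and call. The sketch below is an illustrative assumption of how such a decomposition might look; the schema format, function names, and canned data are invented, and real agent frameworks differ in detail:

```python
import json

def lookup_filing(company: str, year: int) -> dict:
    """An existing, proven capability originally built for human users.
    Stubbed with canned data for illustration."""
    return {"company": company, "year": year, "status": "filed"}

# The same capability decomposed into a tool for the agent: a
# machine-readable description of what it does and what it needs.
FILING_TOOL = {
    "name": "lookup_filing",
    "description": "Return the filing status for a company and year.",
    "parameters": {
        "company": {"type": "string"},
        "year": {"type": "integer"},
    },
}

def call_tool(name: str, arguments: str) -> str:
    """Dispatch an agent's tool call (name plus JSON arguments) to the
    underlying capability and return a JSON result it can read."""
    registry = {"lookup_filing": lookup_filing}
    result = registry[name](**json.loads(arguments))
    return json.dumps(result)

reply = call_tool("lookup_filing", '{"company": "Acme", "year": 2025}')
```

The human-facing application and the agent-facing tool share one implementation; only the interface around it changes, which is the decomposition Hron argues extends what the models can do.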

4. Look beyond the firewall

Thomson Reuters Labs recently launched the Trust in AI Alliance, a builder-led forum for senior AI researchers from Anthropic, AWS, Google Cloud, OpenAI, and Thomson Reuters to discuss how trust is engineered into agentic systems. 

Hron said the Alliance, which shares lessons publicly to inform the broader industry conversation around trustworthy AI, also helps senior members of his team to learn best practices from industry pioneers.

“We’re trying to bring forward a focus for explainability and transparency in terms of how these models operate,” he said.


Hron said the technology pioneers and their models have significantly reduced the time and effort required to get from zero accuracy to 90%.

“But we’re not in the 90% game,” he said. “We’re in the 99% and 99.9% game, and we must consider how we get that extra nine or two nines of accuracy, which is the difference for trust.”
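The gap between those "nines" is easy to quantify: at 99% accuracy roughly 1 in 100 answers is wrong, at 99.9% roughly 1 in 1,000. A quick back-of-the-envelope calculation (the query volume is a hypothetical figure for illustration):

```python
def expected_errors(accuracy: float, queries: int) -> int:
    """Expected number of wrong answers at a given accuracy level."""
    return round((1 - accuracy) * queries)

queries = 100_000  # hypothetical monthly query volume
errors_90 = expected_errors(0.90, queries)    # 10,000 wrong answers
errors_99 = expected_errors(0.99, queries)    # 1,000 wrong answers
errors_999 = expected_errors(0.999, queries)  # 100 wrong answers
```

Each extra nine cuts the error count by a factor of ten, which is why Hron frames that last margin as where trust, and the competitive edge, is won or lost.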

As part of this process, Thomson Reuters is also working with academic institutions. Late last year, the company announced a five-year partnership to create a joint Frontier AI Research Lab at Imperial College London. 

“In these initiatives, we’re focused on those last two nines of accuracy, because that’s what people buy from us for when we release our products to market,” said Hron.

“The frontier technology organizations will continue to push the limits on what’s possible. But for us, the margin is where the competitive edge in the world of law, tax, and compliance is won and lost. And so that’s what we really need to get right.”


