This week, much of the tech world's glitterati gathered in Lisbon for Web Summit, a sprawling convention showcasing everything from dancing robots to the influencer economy.
Inside the pavilions – warehouse-sized rooms chock full of stages, booths and people networking – the phrase "agentic AI" was everywhere.
There were AI agents that hung around your neck as jewellery, software to build agents into your workflows, and more than 20 panel discussions on the subject.
Agentic AI is essentially artificial intelligence that can carry out specific tasks on its own, like booking your flights, ordering an Uber or helping a customer.
It is the industry's current buzzword and has even crept into the real world, with the Daily Mail listing "agentic" as an 'in' word for Gen Z last week.
But AI agents aren't new. In fact, Babak Hodjat, now chief AI officer at Cognizant, invented the technology behind one of the most famous AI agents, Siri, in the 1990s.
"Back then, the fact that Siri itself was multi-agentic was a detail that we didn't even talk about – but it was," he told Sky News from Lisbon.
"Historically, the first person that talked about something like an agent was Alan Turing."
New or not, AI agents are thought to come with even more risks than general-purpose AI, because they interact with and modify real-world conditions.
The risks that come with AI, like bias in its data or unforeseen circumstances in how it interacts with humans, are magnified by agentic AI because it interacts with the world on its own.
"Agentic AI introduces new risks and challenges," wrote the IBM Responsible Technology Board in their 2025 report on the technology.
"For example, one new emerging risk involves data bias: an AI agent might modify a dataset or database in a way that introduces bias.
"Here, the AI agent takes an action that potentially impacts the world and could be irreversible if the introduced bias scales undetected."
But for Mr Hodjat, it's not AI agents we need to worry about.
"People are over-trusting [AI] and taking their responses at face value without digging in and making sure that it's not just some hallucination that's coming up.
"It's incumbent upon all of us to learn what the limits are, the art of the possible, where we can trust these systems and where we can't, and educate not just ourselves, but also our children."
His warning will feel familiar, particularly in Europe, where there's an increased wariness around AI compared to the US.
But have we become too cautious when it comes to AI – at the risk of a far more existential threat in the future?
Jarek Kutylowski, chief executive of German AI language giant DeepL, certainly thinks so.
This year, the EU AI Act came into force, strict legislation governing how companies can and can't use AI.
In the UK, companies are governed by existing legislation like GDPR, and there is uncertainty about how strict our rules will be in the future.
When asked whether we needed to slow down AI innovation in order to put stricter regulations in place, Mr Kutylowski said it was a question worth grappling with… but that in Europe, we're taking it too far.
"Looking at the apparent risks is easy; looking at the risks like what are we going to miss out on if we don't have the technology, if we're not successful enough in adopting that technology, that's probably the bigger risk," said Mr Kutylowski.
"I definitely see a much larger risk in Europe being left behind in the AI race.
"You won't see it until we start falling behind and until our economies can't capitalise on those productivity gains that maybe other parts of the world will see.
"I don't believe personally that technological progress can be stopped in any way, so it's more a question of 'how do we pragmatically embrace what's coming ahead?'"