Charting the Course for the Future of Generative AI
NOVEMBER 18, 2022 9:29AM
SPONSORED CONTENT
The Reuters news staff had no role in the production of this content. It was created by Reuters Plus, the brand marketing studio of Reuters.
Produced by Reuters Plus in partnership with Shell
For all the excitement and innovation surrounding generative AI, the current media frenzy has raised as many concerns as it has inspired breakthroughs.
There is little dispute about the potential for society to use this technology to increase productivity, free humans to do creative thinking and level the playing field for communities across the world, with one caveat: we must build it responsibly.
Before we allow ourselves to get swept up in sensational headlines about bots stealing professional jobs or entire industries being decimated at the hands of generative AI, the imperative is to look at the guardrails we have in place. These will be the true key to driving trust, encouraging adoption and building the steady foundation that lets users truly harness the opportunity. As professionals, this must be the biggest investment we make.
To put it simply, the generative AI revolution needs to get serious about transparency.
The stakes are high. There are already countless examples of AI hallucinations, which occur when an AI algorithm makes confident predictions based on patterns it has seen in a corpus of text but fails to incorporate contextual information into that pattern recognition. Add the fact that many publicly available sources used to train the technology, such as Wikipedia and Reddit, are themselves prone to bias and manipulation, and it becomes clear how AI can often get things wrong.
A Mandate for Transparency
“The AI saw the pattern, but it failed to sweat the details. In the real world, outside of our labs, that could have had serious consequences.”
At Thomson Reuters, we encountered this phenomenon early on in our tests of generative AI’s ability to understand and interpret case law. When asked a question about a specific state law in Michigan, the AI could confidently and accurately identify the law and provide details about it.
However, when we asked about that same law in Massachusetts, it confidently gave us the same answer, subbing in Massachusetts for Michigan. There was only one problem: that law doesn’t exist in Massachusetts. The AI saw the pattern, but it failed to sweat the details. In the real world, outside of our labs, that could have had serious consequences if the AI’s answer had been taken at face value.
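As a purely illustrative sketch (the data and function names here are hypothetical, not any real system), the failure mode described above can be thought of as a predictor that has learned a surface pattern from its training text but never grounded that pattern in per-state facts, so it fills the gap for an unseen state with the same confident answer:

```python
# Toy illustration (not a real language model): a "predictor" that has seen
# the pattern "<state> has a statute on point" in its training snippets and
# applies it to any state it is asked about, even one it never observed.
from collections import Counter

# Hypothetical training snippets: (state, completion) pairs.
training_snippets = [
    ("Michigan", "has a statute on point"),
    ("Michigan", "has a statute on point"),
    ("Ohio", "has a statute on point"),
]

# The model keeps only the aggregate pattern, not which state it came from.
pattern = Counter(completion for _, completion in training_snippets)

def predict(state: str) -> str:
    # No per-state grounding: every state gets the majority completion.
    most_common, _ = pattern.most_common(1)[0]
    return f"{state} {most_common}"

print(predict("Michigan"))       # seen in training: happens to be right
print(predict("Massachusetts"))  # never seen: same confident answer, possibly wrong
```

The point of the sketch is that the wrong answer and the right answer are produced by exactly the same mechanism, which is why the output alone gives no signal that anything is amiss.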
The example was a moment of clarity for our team of software engineers and subject matter experts because it illustrated just how important trusted, proprietary data, deep subject matter expertise and the ability to cite authoritative sources would be to the future of generative AI in real-world professional applications. After all, it is one thing when a generative AI tool flubs the punchline of a knock-knock joke or drafts a letter full of clichés; it is another when its output has an impact on truth, transparency and justice.
At Thomson Reuters, we recently announced our commitment to invest $100 million a year to further the development of generative AI across a wide range of products, including the leading software solutions used by legal and tax professionals to research case law, draft contracts and manage tax compliance globally. The biggest investment we will make is ensuring our AI is responsibly built. That is why we updated and published our AI Principles to demonstrate our commitment to ethical AI.
We also need to ask ourselves tough questions about whether generative AI really is the best technology for each use case, and explore its limitations when we encounter them. Most of all, we need to share our findings, collaborate with industry partners, clients and regulators, and stay cognizant of both the huge potential and the associated risks as we move through this exciting period of innovation.
By Steve Hasker
President & CEO,
Thomson Reuters
Before they can be fully trusted to do important work, professional-grade large language models need to be trained using comprehensive, authoritative data sets. Perhaps even more importantly, that process needs to be intermediated by human subject matter experts who understand the nuances and the context and have the power to override inaccuracies.
And, even after those steps are taken, any output from the model must also include a clear audit trail of where the results came from, with traceable links back to source materials. Put simply, Humans + AI is the only way we will be successful.
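As one purely hypothetical sketch of the "Humans + AI" pattern (the class, function names and review flow below are illustrative assumptions, not Thomson Reuters' implementation), a model's answer can be treated as unreleasable until it carries an audit trail back to sources and an expert has signed off:

```python
from dataclasses import dataclass, field

# Hypothetical, minimal model of a citation-grounded answer.
@dataclass
class Answer:
    text: str
    citations: list = field(default_factory=list)  # traceable links to sources
    human_approved: bool = False                   # expert sign-off flag

def release(answer: Answer) -> Answer:
    """Gate an answer: no audit trail or no human sign-off means no release."""
    if not answer.citations:
        raise ValueError("refusing to release: no audit trail to sources")
    if not answer.human_approved:
        raise ValueError("refusing to release: awaiting expert review")
    return answer

# A draft straight from the model: fluent, but uncited and unreviewed.
draft = Answer(text="The statute applies in this jurisdiction.")

try:
    release(draft)
except ValueError as err:
    print(err)  # the guardrail blocks the ungrounded answer

# Only after an expert verifies it against primary sources does it go out.
draft.citations.append("primary source reference (hypothetical)")
draft.human_approved = True
print(release(draft).text)
```

The design choice the sketch encodes is that grounding and human review are hard preconditions of output, not optional post-hoc checks.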
Building a Strong Foundation
“While we may not have all the answers now, we can start putting some fundamental guardrails in place, in the form of regulation, to help drive trust and innovation in AI”
This is not just an industry issue, but a societal imperative, and not one we can solve alone. Before we can trust computer-generated guidance, ethical or otherwise, people need to know how an AI system arrives at its conclusions and recommendations and feel confident that the results are explainable.
While we may not have all the answers now, we can start putting some fundamental guardrails in place, in the form of regulation, to help drive trust and innovation in AI. We need to get comfortable with the concept of regulating based on what we know now – and being prepared to course correct as we go.
It is an enormously exciting time at the intersection of technology and professional information. We are on the verge of a revolution in creating new efficiencies by getting professionals the right answers faster. We will see the development of new ways of collaborating and new models of creative problem solving that haven’t even been envisioned yet. But before we can make any of those a reality, we need to do the critical foundational work that ensures everyone can trust the results.
To read more about new technologies impacting the way the world works, visit AI @ Thomson Reuters.