Labour shouldn’t treat all AI firms the same
Zahra Bahrololoumi, CEO of UK and Ireland at Salesforce, speaking during the company's annual Dreamforce conference in San Francisco, California, on Sept. 17, 2024.
LONDON — The UK chief executive of Salesforce wants the Labour government to regulate artificial intelligence — but says it's important that policymakers don't tar all technology companies developing AI systems with the same brush.
Speaking to CNBC in London, Zahra Bahrololoumi, CEO of UK and Ireland at Salesforce, said the American enterprise software giant takes all legislation “seriously.” However, she added that any British proposals aimed at regulating AI should be “proportional and tailored.”
Bahrololoumi noted that there’s a difference between companies developing consumer-facing AI tools — like OpenAI — and firms like Salesforce making enterprise AI systems. She said consumer-facing AI systems, such as ChatGPT, face fewer restrictions than enterprise-grade products, which have to meet higher privacy standards and comply with corporate guidelines.
“What we look for is targeted, proportional, and tailored legislation,” Bahrololoumi told CNBC on Wednesday.
“There’s definitely a difference between those organizations that are operating with consumer-facing technology and consumer tech, and those that are enterprise tech. And we each have different roles in the ecosystem, [but] we’re a B2B organization,” she said.
A spokesperson for the UK’s Department of Science, Innovation and Technology (DSIT) said that planned AI rules would be “highly targeted to the handful of companies developing the most powerful AI models,” rather than applying “blanket rules on the use of AI.”
That suggests the rules might not apply to companies like Salesforce, which, unlike OpenAI, don’t develop their own foundation models.
“We recognize the power of AI to kickstart growth and improve productivity and are absolutely committed to supporting the development of our AI sector, particularly as we speed up the adoption of the technology across our economy,” the DSIT spokesperson added.
Data security
Salesforce has been heavily touting the ethics and safety considerations embedded in its Agentforce AI technology platform, which allows enterprise organizations to spin up their own AI “agents” — essentially, autonomous digital workers that carry out tasks for different functions, like sales, service or marketing.
For example, one feature called “zero retention” means no customer data can ever be stored outside of Salesforce. As a result, generative AI prompts and outputs aren’t stored in Salesforce’s large language models — the programs that form the bedrock of today’s genAI chatbots, like ChatGPT.
With consumer AI chatbots like ChatGPT, Anthropic’s Claude or Meta’s AI assistant, it’s unclear what data is being used to train them or where that data gets stored, according to Bahrololoumi.
“To train these models you need so much data,” she told CNBC. “And so, with…