Emphasising the need for accountability and transparency, Britain set out proposed principles aimed at preventing generative AI models such as ChatGPT from being dominated by a handful of large digital corporations to the detriment of consumers and businesses.
Like regulators elsewhere in the world, Britain's Competition and Markets Authority (CMA) is working to curb some of AI's potential drawbacks without stifling innovation.
The CMA’s chief executive, Sarah Cardell, said on Monday that the technology had real potential to boost productivity and ease millions of everyday tasks, but that a positive future could not be taken for granted.
She warned there was a risk that a small number of firms with market power could dominate the use of AI, preventing the wider economy from reaping its full benefits.
Because of this, she said, “we have today proposed these new principles and launched a wide-ranging engagement program to help ensure the development and use of foundation models evolve in a way that promotes competition and protects consumers.”
The CMA’s draft guidelines, which arrive six weeks before Britain holds a summit on global AI safety, will serve as the foundation for its AI policy as it obtains new authority to regulate digital markets in the coming months.
The watchdog said it would now seek views from leading AI developers such as Google, Meta, OpenAI, Microsoft, NVIDIA and Anthropic, as well as from academics, governments and other regulators.
Separately, the Foreign Secretary will appeal to international partners to coordinate efforts on AI development in Africa and accelerate progress towards the UN Sustainable Development Goals. Leading the way, the UK will launch an “AI for Development” programme in collaboration with Canada’s International Development Research Centre, aimed at helping developing countries, initially those in Africa, build domestic AI expertise and foster innovation.
In addition, the UK today announced a £1 million investment in a pioneering fund that will use artificial intelligence (AI) to help countries and international organisations avert crises before they arise. The Complex Risk Analytics Fund (CRAF’d), which will operate with international partners, will also help countries regain stability and return to sustainable development when disasters do occur.
In March, Britain decided against creating a new dedicated regulator, opting instead to divide responsibility for AI between the CMA and other bodies overseeing human rights and health and safety.
The United States is also weighing potential AI regulation, and in April, digital ministers from the Group of Seven (G7) economies agreed to adopt “risk-based” rules that would also preserve an open environment for the technology.