OpenAI, the company behind the popular ChatGPT AI chatbot, has announced a $1 million fund to be split into 10 equal $100,000 grants for experiments in democratic processes related to the governance of AI software. The grants will go to recipients who propose compelling frameworks for addressing questions such as whether AI should be allowed to criticize public figures and how it should weigh the views of the “median individual” globally.
Critics have raised concerns about inherent biases in AI systems like ChatGPT, which can produce outputs containing racist or sexist content. There are growing worries that AI working in conjunction with search engines such as Google and Bing could disseminate inaccurate information convincingly.
OpenAI, which has received $10 billion in backing from Microsoft, has been advocating for AI regulation. However, the company recently threatened to withdraw from the European Union due to proposed regulations, although OpenAI’s CEO, Sam Altman, believes the rules may be amended before implementation.
The grants are too small to fund extensive AI research: salaries for AI engineers and other professionals in the field range from $100,000 to more than $300,000.
OpenAI aims to ensure that AI systems benefit all of humanity and are designed to be as inclusive as possible. The grant program is a first step in that direction, allowing the company to gather insights on AI governance. However, the program’s recommendations will not be binding.
Sam Altman, an advocate for AI regulation, has emphasized the importance of responsible governance while simultaneously releasing updates for ChatGPT and image-generator DALL-E. Altman recently testified before a U.S. Senate subcommittee, expressing concerns about the potential negative impacts of AI.
Microsoft, also endorsing comprehensive AI regulation, has pledged to integrate the technology into its products. The company is competing with OpenAI, Google, and other startups to offer AI solutions to consumers and businesses.
The capacity of AI to improve efficiency and decrease labor expenses has captured the attention of diverse industries. However, there are apprehensions about the spread of misinformation and inaccuracies through AI systems, referred to as “hallucinations” in the industry. AI has already been behind several widely believed hoaxes, including a viral image of an explosion near the Pentagon that briefly impacted the stock market.
Despite calls for increased regulation, Congress has struggled to pass meaningful legislation to rein in Big Tech companies.