Ever since its introduction in November 2022, ChatGPT has been one of the most discussed technologies. It benefits users in numerous ways, yet several countries have begun restricting it, and regulatory policies for ChatGPT are emerging on a global scale. India, too, has placed restrictions on ChatGPT.
Read this article for more information on the restrictions on ChatGPT in India.
The first country to ban ChatGPT was Italy, which did so on March 31st, 2023. Authorities justified the ban by citing concerns over how the chatbot collects data and the lack of safeguards preventing children from using it. OpenAI, the firm behind ChatGPT, was temporarily but immediately barred from processing the data of Italian users. The Italian Data Protection Authority also plans to open an investigation into potential GDPR violations by the chatbot.
However, Italy isn’t the only country restricting ChatGPT. Several others, including India, have moved to limit the use of ChatGPT-style generative AI programs. The Italian government’s decision to outright ban ChatGPT has prompted other nations to ask whether similarly drastic steps are needed to regulate chatbots, and India is among them.
The NITI Aayog, India’s planning commission, has recently released guiding documents on AI, including the National Strategy for Artificial Intelligence and the Responsible AI for All report. These works lay out the goals, principles, and vision for AI research and development in India, with a focus on social and economic inclusion, innovation, and trustworthiness. However, they do not address critical issues inherent to artificial intelligence, such as accountability, liability, transparency, explainability, and human oversight, and they are not legally binding.
Ever since the launch of AI models, more and more students have been turning to these technologies for help, which authorities consider improper. ChatGPT and other AI writing assistants have been deemed inappropriate for use in academic settings, and relying on them is treated as cheating. Specialists frown upon the use of any artificial intelligence model in academic work, arguing that students gain little from it. It is also widely held that the recently released ChatGPT cannot be fully trusted and risks misleading its users.
Other countries that are planning to regulate ChatGPT include the following:
The European Union (EU) is frequently at the forefront of tech legislation. Its proposed European Artificial Intelligence Act indicates that the bloc of 27 countries plans to take a cautious approach to AI.
The European AI Act proposes a unified legal and regulatory framework for AI across the European Union, excluding only the defence sector. It would assign different responsibilities and transparency standards to those who provide or use various AI tools based on their perceived level of risk, from low to unacceptable. Other legislation, such as the General Data Protection Regulation (GDPR), would complement the AI Act.
Lawmakers did not anticipate the rapid development of AI in 2022 and beyond when the act was drafted, so it does not fully account for generative AI systems capable of creating content and art on par with humans.
The United Kingdom has taken a different approach to regulating the much-hyped technology. Unlike the EU, the UK government is not introducing new legislation; instead, it is urging existing industry regulators to apply current rules to AI.
In a white paper released last week, the Department for Science, Innovation and Technology (DSIT) laid out five principles: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. The country does not plan to ban AI or create a dedicated regulator; instead, it appears to be taking a relatively light-touch approach.
The USA has several AI regulatory frameworks at the moment. However, a new Trade and Technology Council (TTC) between the United States and the European Union is making strides towards establishing shared ground rules for the regulation of artificial intelligence.
The National Institute of Standards and Technology (NIST) promotes its AI Risk Management Framework to “improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”
However, businesses that choose not to adopt the framework will face no consequences. The United States, meanwhile, is doing little to restrict the use of ChatGPT within its borders.
In an effort to halt OpenAI’s commercial rollout of GPT-4, an AI research think tank lodged a complaint with the FTC last month. The Center for AI and Digital Policy (CAIDP) has accused OpenAI of engaging in unfair and deceptive business practices in violation of the FTC Act. If the complaint is upheld, OpenAI could face a probe and have its LLMs pulled from commercial use.
These are some of the countries trying to regulate ChatGPT. There are also countries where ChatGPT is simply unavailable, including China, Egypt, Iran, North Korea, Russia, Ukraine, and a few others. Users in these countries cannot access ChatGPT, largely because of severe internet restrictions.