Should we be concerned about Generative Artificial Intelligence models?

By Mia Hem


Generative Artificial Intelligence (AI) models are being developed at an unprecedented speed and are known for their impressive ability to learn and generate intelligent outputs. ChatGPT by OpenAI, for example, is one of the most popular models used both within and outside the industry. In daily life, generative AI most commonly assists by boosting creativity, efficiency, customization, and personalization, and it has enormous potential for scalability. However, as the development of these models continues, concerns are being raised that their advancement is happening too quickly and that the tech companies developing them do not adequately consider the risky consequences this may lead to.

An example is Large Language Models (LLMs) such as GPT-4, which power generative AI applications like ChatGPT. These models have shown that they can excel on law and medical entrance exams and generate many types of content. The recent success of LLMs stems from the combination of massive amounts of data, algorithms capable of learning from it, and the computational power to process it. However, the continued improvement of LLMs could be limited by the availability of training data, the costs of inputs such as electricity and skilled labor, and legal issues such as copyright violations. While the emergent abilities of LLMs are exciting, there is growing concern that these models are being developed too quickly and that regulation is needed to address social bias, copyright infringement, and other potentially harmful behaviors that may affect communities and individuals. Their power has already led to innovative applications in fields like medicine and law, but the potential for catastrophic risks requires careful consideration and control.
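
To make this concrete: because the data and computational power described above are out of reach for most teams, applications built on LLMs typically access a hosted model through an API rather than training one themselves. The following is a minimal sketch using OpenAI's Python client; the model choice and prompts are illustrative assumptions, not recommendations from this article.

```python
# Minimal sketch of querying a hosted LLM through OpenAI's Python client (v1+).
# Assumes the `openai` package is installed and the OPENAI_API_KEY environment
# variable is set; the model name and prompts here are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice; any available chat model would work
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "List two copyright concerns around LLM training data."},
    ],
)

# The generated text is returned in the first choice of the response.
print(response.choices[0].message.content)
```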


The data used to train LLMs has played a significant role in advancing the development of AI models. However, as more people gain access to the technology needed to build these models, there is an increasing need for regulation to ensure that AI is developed responsibly and ethically. This is especially important given the potential social and economic impact of AI on society. Such regulation should keep the following in focus:

  1. Transparency about how AI models are trained, so that errors and biases can be identified and corrected more easily.

  2. Standards that define who is accountable for AI development and for the consequences of AI systems.

  3. Oversight of the development of AI systems, so that regulation can help ensure a more equitable and fair distribution of their benefits.

  4. Effective data protection measures and privacy policies, so that these systems comply with privacy standards and minimize potential harm.


In conclusion, the development of generative AI models such as LLMs is an exciting area of research, but it must be done responsibly. There is a pressing need for regulation to address the concerns these models raise. Governments, international organizations, industry stakeholders, and the broader public must work together to establish and enforce such regulations. As AI continues to be developed, we must be mindful of its potential impact on society and ensure that it is developed in a way that benefits everyone.


