
Serbia Adopts Ethics Guidelines for Artificial Intelligence

On 23 March 2023, the Serbian government adopted the Ethics Guidelines for the Development, Implementation and Use of Reliable and Responsible AI (the “Guidelines”), which may be seen as another step in harmonising Serbia’s legislative framework with that of the European Union, following the Proposal for an AI Regulation announced by the European Commission in 2021. The Guidelines largely rely on UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted in 2021 with the participation of Serbian representatives. While the EU is still awaiting its own regulatory framework on AI, Serbia has now taken a first step down this road.

Purpose

The main goal of the Guidelines is to prevent AI systems from endangering or marginalising people and their actions, and to ensure that freedom of action, opinion and decision-making is not violated in a way that would render the rights and assets preserving those values meaningless, diminished or forgotten. As stated in the Guidelines, the use of AI should serve to improve human productivity, optimise work resources and improve quality of life.

The Guidelines set out the main principles and conditions for reliable and responsible AI systems, followed by a self-assessment questionnaire to be completed by developers or users of AI systems and recommendations for improvement in line with those principles and conditions. In addition, the Guidelines identify which AI systems may be considered high-risk.

General principles and conditions for reliable and responsible AI

The Guidelines set out general principles and conditions for the creation of reliable and responsible AI that all individuals and legal entities developing, applying or using AI systems should observe.

These general principles (the “General Principles”) are: (i) explainability and verifiability, which emphasises the transparency of the AI system so that it can be checked throughout its life cycle; (ii) dignity, which means that the AI system must not in any way lead to the subordination of humans to the functions of the system; (iii) the “do not harm” principle, which means that the AI system must be safe, must contain mechanisms for avoiding damage to people and their property, and must not be used for malicious purposes; and (iv) fairness, which protects the rights and integrity of people, particularly sensitive categories of persons (e.g. persons with disabilities).

The conditions for the creation of reliable and responsible AI are based on the above General Principles and are: (i) action (mediation, control, participation) and supervision; (ii) technical reliability and safety; (iii) privacy, data protection and data management; (iv) transparency; (v) diversity, non-discrimination and equality; (vi) social and environmental well-being; and (vii) responsibility.

Questionnaire and recommendations

For each of the above conditions, the Guidelines set out (i) a self-assessment questionnaire (the “Questionnaire”) and (ii) recommendations for complying with that condition.

The Questionnaire is designed to help individuals and legal entities that develop, market, acquire, apply and/or use AI systems assess their compliance with the stated conditions. It is recommended that the Questionnaire be completed at the earliest stage of creating an AI system, i.e. in the planning phase. By completing the Questionnaire, developers can identify areas for improvement and gain insight into measures already in place.

Each condition is accompanied by a list of recommendations that the entity concerned must implement in order to achieve reliable and responsible AI systems.

Identified high-risk AI systems

The Guidelines also identify high-risk AI systems that should be analysed and evaluated separately due to their importance and potential to influence people and their integrity. These include, for example, AI systems in the field of health (particularly systems analysing genetic and health data) and AI systems for the management of critical infrastructure (particularly systems that manage road traffic and the supply of water, gas, heating and electricity).

Conclusion

The Guidelines have been adopted to provide a framework and guide the work of all participants within the AI ecosystem. As an AI legal framework is only just starting to take shape in the EU, these Guidelines will enable further development in this ever-expanding area. Guided by the principles set forth in the Guidelines, AI should be used for the benefit of entire communities, and AI systems should serve to maintain and nurture democratic processes and respect the plurality of values and life choices of individuals. The Guidelines provide a basis for a wider implementation of AI in decisions that will shape social changes, increase knowledge and promote economic progress.

Source: Lexology
