





Our organisation has defined guidelines for maintaining the AI literacy and responsible AI awareness of our staff. These may include the following:
Training should focus on the AI aspects most relevant to each job role and should regularly cover the fundamentals that concern all employees:






The organisation should maintain a list of its AI systems and ensure that transparency obligations under relevant laws are met. These obligations should be documented, and the results of this process should be maintained as a reference for compliance activities.
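
A lightweight way to keep such a list auditable is to hold it as structured data. The following is a minimal sketch in Python, assuming a simple in-memory register; the record fields and the example entry are illustrative assumptions, not prescribed by any regulation.

```python
# Minimal sketch of an internal AI system register. Field names and the
# example entry are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    transparency_obligations: list[str]                 # obligations identified for the system
    evidence: list[str] = field(default_factory=list)   # links to supporting documents

register: list[AISystemRecord] = [
    AISystemRecord(
        name="Customer support chatbot",
        purpose="First-line customer service",
        transparency_obligations=["Inform users that they are interacting with an AI system"],
        evidence=["chatbot-disclosure-review.pdf"],
    ),
]
```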






The organisation should implement a process to ensure that synthetic content, such as audio, images, video, or text generated by its AI systems, is marked as artificially generated or manipulated. This marking must be in a machine-readable format. The chosen technical solution should be effective, interoperable, robust and reliable. It should align with current industry standards where feasible and cost-effective to do so.
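
As one possible technical approach, a machine-readable marker can be embedded in the generated file's metadata. The sketch below uses the Pillow imaging library to attach a provenance label to a PNG image; the metadata key and label schema are assumptions for illustration, and a production solution should prefer an interoperable provenance standard such as C2PA where feasible.

```python
# Minimal sketch: embed a machine-readable "AI-generated" marker in PNG
# metadata using Pillow. The key name and label schema are illustrative
# assumptions, not a mandated standard.
import json
from datetime import datetime, timezone
from PIL import Image, PngImagePlugin

def mark_as_synthetic(src_path: str, dst_path: str, generator: str) -> None:
    """Attach a machine-readable provenance label to a generated image."""
    label = {
        "synthetic": True,                        # content is AI-generated
        "generator": generator,                   # producing system (assumed field)
        "marked_at": datetime.now(timezone.utc).isoformat(),
    }
    meta = PngImagePlugin.PngInfo()
    # Store the label as a PNG text chunk so downstream tools can read it.
    meta.add_text("ai-provenance", json.dumps(label))
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)

# Usage: mark_as_synthetic("raw.png", "marked.png", "acme-imagegen-v2")
```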






The organisation should ensure that when a person interacts with an AI system, they are clearly informed of this fact. This notification must be provided at the very beginning of the interaction in a way that is easy to notice, understand, and meets applicable accessibility standards. For example, a chatbot should introduce itself as an AI assistant at the start of the conversation.
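
As a minimal sketch of the chatbot example, the disclosure can be hard-wired as the first message of every session; the wording and function names below are illustrative assumptions.

```python
# Minimal sketch: every chat session opens with an AI disclosure before any
# other output. Message wording and function names are illustrative.
AI_DISCLOSURE = (
    "Hi! I am an AI assistant, not a human agent. "
    "You can ask to be transferred to a person at any time."
)

def start_session(send) -> None:
    """Send the disclosure as the very first message of the interaction."""
    send(AI_DISCLOSURE)

start_session(print)  # demo: prints the disclosure to stdout
```
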
A structured internal record should be maintained which defines the scope and methodology of the notification. It should include at least the following:






The organisation should ensure that natural persons interacting directly with an AI system are informed in a clear and distinguishable manner that they are interacting with an AI system. This notification must be provided at the latest at the time of their first interaction or exposure to the AI system. The method of providing this information must conform to applicable accessibility requirements and be perceivable by individuals with disabilities.






Organisations deploying AI systems should establish and maintain a documented process for the identification and disclosure of AI-generated or AI-manipulated content in accordance with Article 50(4) of the AI Act.
The process should define disclosure triggers, required labelling methods and responsibilities across the content lifecycle. It should differentiate between content categories, including text, audio, video and synthetic media such as deep fakes, and specify appropriate disclosure mechanisms for each.
The process should also define and justify applicable exceptions, including artistic or creative expression, human-reviewed or human-edited content and uses permitted by law, and ensure that exception criteria are applied consistently and are auditable.
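
One way to keep such a process consistent and auditable is to express it as reviewable data. The sketch below captures triggers, labelling methods and exceptions per content category; all category names, methods and exception wordings are assumptions for illustration.

```python
# Minimal sketch of a disclosure policy expressed as reviewable data.
# Category names, labelling methods and exception wordings are illustrative.
DISCLOSURE_POLICY = {
    "text": {
        "trigger": "published without human editorial review",
        "labelling": "visible notice plus machine-readable metadata",
        "exceptions": ["human-reviewed content under editorial responsibility"],
    },
    "audio": {
        "trigger": "synthetic speech released externally",
        "labelling": "audible notice plus embedded metadata",
        "exceptions": ["uses authorised by law"],
    },
    "video": {
        "trigger": "manipulated footage, including deep fakes",
        "labelling": "on-screen label plus machine-readable marker",
        "exceptions": ["evidently artistic or creative works"],
    },
}

# Example lookup by a publishing pipeline (hypothetical usage).
print(DISCLOSURE_POLICY["video"]["labelling"])
```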






The organisation should assess and document when an AI system's nature is considered obvious to the user, thereby not requiring an explicit notification. This assessment should be based on the perspective of a reasonably well-informed and observant person, considering the specific context of use. The justification for forgoing the notification must be recorded.
AI systems legally authorised to detect, prevent, investigate or prosecute criminal offences are exempt from this requirement, provided appropriate safeguards are in place to protect the rights and freedoms of third parties. However, an assessment is still necessary if those systems are made available to the public for the purpose of reporting criminal offences.






The organisation should ensure each high-risk AI system is designed and documented to provide sufficient transparency for deployers to interpret its outputs correctly and use it appropriately.
The level of transparency must be proportionate to the system’s intended purpose and risk profile and must support compliance with the obligations of both the provider and the deployer under Section 3. Transparency measures should include, at a minimum:
The organisation should ensure that this information is kept up to date and made available to deployers in a clear and accessible format.






Providers should collaborate with distributors that report risks or, where relevant, with deployers of high-risk AI systems when investigating the causes of those risks.






Providers of high-risk AI systems should provide clear guidelines for human oversight of such systems. They should at least ensure the following:






The organisation should develop and maintain a clear and easily understandable notice for individuals exposed to an emotion recognition or biometric categorisation system. This notice must explicitly inform them of the system’s operation, its purpose and the processing of their personal data.
A process should be documented and implemented to ensure this information is provided in a timely and accessible manner, appropriate to the deployment context. The process should align with applicable Union data protection legislation, including Regulation (EU) 2016/679, Regulation (EU) 2018/1725 or Directive (EU) 2016/680, as relevant.
Where the system is used for the detection, prevention or investigation of criminal offences under applicable law, the organisation should document the legal basis and safeguards relied upon when assessing the applicability of this obligation.
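
As an illustration only, the notice could be maintained as a reusable template; the wording and placeholder names below are assumptions and must be adapted to the deployment context and to legal review.

```python
# Minimal sketch of a notice template for exposed persons; the wording and
# placeholders are illustrative assumptions, not approved legal text.
NOTICE = (
    "This service uses an emotion recognition system operated by {operator}. "
    "Purpose: {purpose}. Your personal data is processed in accordance with "
    "Regulation (EU) 2016/679; see {privacy_url} for details and your rights."
)

print(NOTICE.format(
    operator="Example Ltd",
    purpose="measuring customer service satisfaction",
    privacy_url="https://example.org/privacy",
))
```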






The organisation should establish clear guidelines for informing users when they are interacting with AI-generated or AI-manipulated content. This ensures transparency and helps prevent deception. Topics can include, for example:






The organisation should define and document the specific conditions under which AI-generated content is exempt from marking requirements. These guidelines should clarify what constitutes an assistive function, a non-substantial alteration, or a legally authorised exemption, ensuring consistent application of the rules.
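
A simple decision helper can make the documented criteria explicit and testable. The sketch below paraphrases the three exemption grounds named above; the encoding is an assumption for illustration and does not replace legal assessment.

```python
# Minimal sketch of an exemption check encoding the documented criteria;
# this is an illustration, not legal advice.
def exempt_from_marking(assistive_function: bool,
                        substantial_alteration: bool,
                        legally_authorised: bool) -> bool:
    """Return True when a documented exemption condition applies."""
    if legally_authorised:
        return True   # use authorised by law
    if assistive_function and not substantial_alteration:
        return True   # assistive editing that does not substantially alter the input
    return False

# Example: a grammar-correction suggestion is assistive and non-substantial.
assert exempt_from_marking(assistive_function=True,
                           substantial_alteration=False,
                           legally_authorised=False)
```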






The organisation should create and maintain comprehensive instructions for use for its AI systems. These instructions must be provided to deployers in a clear, complete, and accessible format.
The documentation should include at least:






Providers of high-risk AI systems should ensure that the usage instructions for their systems include key operational and contact information. This helps deployers manage the system effectively and transparently throughout its lifecycle.
The instructions should specify:






The organisation should enhance the usage instructions for its high-risk AI system with key details about its data requirements and performance characteristics. This information helps deployers use the system responsibly and effectively.
The instructions should include:






The organisation should establish a process for informing its personnel about the deployment of any high-risk AI system at the workplace. This communication must happen before the system is put into use. The process should ensure that both individual workers and their representatives are informed in accordance with applicable labour laws and practices.






The organisation should establish a clear process for informing natural persons when they are subject to a decision made or assisted by a high-risk AI system. This process should define the content, timing, and method of delivery for the notification to ensure transparency and compliance.
The notification should clearly state that an AI system is in use and explain its purpose and role in the decision-making process. This information must be clear, concise, and easy to understand.
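
As a sketch of what the notification content could look like in practice, the snippet below composes a short notice from the elements named above; the wording, system name and field names are illustrative assumptions.

```python
# Minimal sketch: compose the notice shown to a person subject to an
# AI-assisted decision. Wording and field names are illustrative.
def decision_notice(system_name: str, purpose: str, role: str) -> str:
    return (
        f"An AI system ({system_name}) was used in this decision. "
        f"Purpose: {purpose}. Role in the decision: {role}."
    )

print(decision_notice(
    system_name="CreditScore v3",   # hypothetical system name
    purpose="assessing loan repayment risk",
    role="produced a risk score reviewed by a human case handler",
))
```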






The organisation should define the content and structure for explanations provided to individuals affected by a high-risk AI system's decision. The goal is to ensure the explanation is clear, meaningful, and presented in an understandable way.
The guidelines should specify that an explanation includes:






The organisation should provide deployers with the necessary information and training to use high-risk AI systems safely. This is a key measure to mitigate risks that cannot be eliminated by design. The information must include all elements required by Article 13, such as the system's intended purpose, limitations, and performance. Where needed, training should also be provided that is tailored to the deployer's technical background and the intended use context.






The organisation should ensure that the instructions for use accompanying the high-risk AI system clearly state its performance. This includes specifying the accuracy metrics used for testing, such as precision or recall, and declaring the achieved accuracy levels for those metrics. This information helps users understand the system's capabilities and limitations.
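
For reference, precision and recall can be computed from a labelled test set as shown below; the counts used in the example are illustrative.

```python
# Minimal sketch: compute the declared accuracy metrics from test results.
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    precision = tp / (tp + fp)   # share of positive predictions that were correct
    recall = tp / (tp + fn)      # share of actual positives that were found
    return precision, recall

# Illustrative counts: 90 true positives, 10 false positives, 30 false negatives.
p, r = precision_recall(tp=90, fp=10, fn=30)
print(f"precision={p:.2f}, recall={r:.2f}")   # precision=0.90, recall=0.75
```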






The organisation should implement a verification step to ensure the CE marking is correctly applied to each high-risk AI system before it is made available. This check should confirm that the marking is visible, legible, and indelible. For digital systems, it must be verified that the mark is easily accessible through the user interface or other electronic means.
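
The verification step can be recorded as an explicit release gate. The sketch below encodes the conditions named above as a single boolean check; the parameter names are illustrative assumptions.

```python
# Minimal sketch of a pre-release CE-marking gate; parameter names are
# illustrative and the check does not replace a full conformity procedure.
def ce_marking_ok(visible: bool, legible: bool, indelible: bool,
                  digital_system: bool, accessible_electronically: bool) -> bool:
    physical_ok = visible and legible and indelible
    # For digital systems the mark must also be easy to reach via the
    # user interface or other electronic means.
    digital_ok = (not digital_system) or accessible_electronically
    return physical_ok and digital_ok

assert ce_marking_ok(True, True, True,
                     digital_system=True, accessible_electronically=True)
```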






The organisation should establish a documented process for obtaining informed consent from individuals participating in the real-world testing of AI systems. Before testing begins, each participant must receive clear information about the test and provide their consent.
The information provided must cover the test's purpose, duration, and conditions, as well as the participant's right to withdraw at any time without penalty. The process for requesting a reversal of the AI's output and the official test identification number must also be communicated. A dated copy of the consent document should be provided to the participant.
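
A structured consent record makes it easier to evidence that each element was communicated and that a dated copy was issued. The sketch below is illustrative; all field names and values are assumptions.

```python
# Minimal sketch of an informed-consent record; all field names and values
# are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ConsentRecord:
    participant_id: str
    test_id: str            # official test identification number
    purpose: str
    duration: str
    conditions: str
    withdrawal_right: str   # may withdraw at any time without penalty
    reversal_process: str   # how to request reversal of the AI's output
    consent_date: date      # a dated copy is given to the participant

record = ConsentRecord(
    participant_id="P-0042",
    test_id="RWT-2025-001",   # placeholder identifier
    purpose="Evaluate a triage assistant in live support queues",
    duration="8 weeks",
    conditions="Participation limited to non-critical tickets",
    withdrawal_right="Withdraw at any time without penalty",
    reversal_process="Contact the test office to request a reversal",
    consent_date=date(2025, 1, 15),
)
```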






The organisation should identify and formally document the responsibilities for communicating relevant information about its AI systems to interested parties. This includes defining what information is shared, with whom, and who is responsible for the communication.
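
Such responsibilities can be captured in a simple communication matrix, as sketched below; the audiences, topics and owner roles are illustrative assumptions.

```python
# Minimal sketch of a communication responsibility matrix; audiences,
# topics and owner roles are illustrative assumptions.
COMMUNICATION_PLAN = [
    {"information": "Instructions for use and updates",
     "audience": "Deployers",
     "owner": "Product management"},
    {"information": "Serious incident notifications",
     "audience": "Market surveillance authority",
     "owner": "Compliance officer"},
    {"information": "AI interaction disclosures",
     "audience": "End users",
     "owner": "Service owner"},
]

for row in COMMUNICATION_PLAN:
    print(f'{row["owner"]} -> {row["audience"]}: {row["information"]}')
```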






The effectiveness of AI awareness training should be evaluated regularly. The evaluation may include, for example, the following perspectives:






The organisation should ensure that its high-risk AI systems are properly marked before being placed on the market or put into service. This includes affixing the CE marking to indicate conformity. Additionally, the provider's name, registered trade name or trade mark, and a contact address must be indicated on the system, its packaging, or its accompanying documentation.






The organisation should identify and document any legal exceptions to the obligation of informing individuals about the use of a high-risk AI system. This applies particularly to law enforcement contexts where providing such information could compromise investigations or public security, as specified in relevant regulations.
In Digiturvamalli, all framework requirements are mapped to universal information security tasks, so that you can build a single plan that fulfils a large set of requirements.