Guidelines for the use of artificial intelligence systems at the German National Library of Science and Technology (TIB)

(TIB AI Policy)

Status as of 19 November 2024

TIB supports the use and development of systems that operate with the help of artificial intelligence (AI systems). The following principles for the use of AI systems at TIB must be observed in order to ensure compliance with the requirements of the AI Regulation¹ and to guarantee proper use.

I.

Responsibility: Users are responsible for complying with these guidelines when using AI systems. Clear responsibilities for the development and use of AI must be defined and ensured at an early stage.

II.

Handling: Before using an AI system, the categorisation of the system must be checked (Annex). Responsible bodies must be involved at an early stage.

III.

Transparency: The use of AI must be sufficiently transparent. The principles of good scientific practice must be observed in research and publication.

IV.

Risk control: Any risks for TIB or users associated with the use of AI systems must be identified, made transparent and controlled at an early stage.

V.

Security: Confidentiality and integrity of AI systems and the information used (in particular personal data, own/foreign secrets) must be ensured. Suitable measures must be taken to prevent any misuse of the AI systems used.

VI.

Data protection: The data protection officer must be involved before personal data is used in AI systems.

VII.

Intellectual property: Copyrights, industrial property rights and terms of use must be observed when using AI systems. Results must be checked to determine the extent to which they are similar to any original works. The Legal Office must be involved.

VIII.

Proportionality: The use of AI systems should be carefully weighed up. The financial and personnel costs must be taken into account in good time during the planning phase.

¹ REGULATION (EU) 2024/1689 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828

Annex to the TIB AI Policy: Overview of the use of AI systems

‘“AI system” means a machine-based system that is designed to operate with varying degrees of autonomy and, once deployed, demonstrates adaptability and, for explicit or implicit goals, derives from the inputs it receives how it can generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.’ (Art. 3 No. 1 AI Regulation)

AI systems are categorised according to risk classes in accordance with the AI Regulation:

Research systems
Definition: Systems developed specifically for the sole purpose of scientific research and development
Examples: Non-publicly accessible systems used for research projects
Development: permitted*
Utilisation: permitted*
Notes: Also applicable within cooperations with other research institutions

General working aids
Definition: Systems to simplify or speed up work processes outside the scope of high-risk systems
Examples: Systems for subordinate auxiliary activities, e.g. use of ChatGPT for text creation, DeepL translation, Word auto-correction
Development: permitted*
Utilisation: permitted*
Notes: Be careful with open systems accessible on the Internet: uploaded data can no longer be "retrieved"

General-purpose systems
Definition: Self-developed systems/models with significant general usability that are offered publicly (only relevant for providers)
Examples: Systems/models with particular relevance, such as ChatGPT (provider: the company OpenAI); see FAQ
Development: permitted*, but duties**
Utilisation: not applicable
Notes: Exceptions for research and open source

High-risk systems
Definition: Systems that can have an increased impact on safety, health or critical infrastructure
Examples: Biometric identification, access control or assessments in the education sector, employee recruitment systems
Development: permitted*, but duties**
Utilisation: permitted*, but duties**
Notes: Exception: neither increased risk to health, safety or fundamental rights nor significant influence on decision-making processes

Prohibited practices
Definition: Systems for arbitrary surveillance or influencing
Examples: Classification and evaluation of people according to ethical characteristics and social behaviour
Development: prohibited
Utilisation: prohibited
Notes: Exceptions for research
*Other legal obligations such as intellectual property and data protection must always be assessed separately by the Legal Office and data protection officer. Separate labelling obligations apply in particular to "deepfakes" or "interactions with users" (e.g. chatbots).
**e.g. risk management systems, documentation, logging, archiving, monitoring obligations, transparency and information obligations, human supervision obligations