What is the AI Act?
The EU AI Act is the first-ever comprehensive legal framework on AI worldwide. The aim of the Act is to foster trustworthy AI in Europe.
The AI Act sets out a clear set of risk-based rules for AI developers and deployers regarding specific uses of AI.
Who does the AI Act apply to?
The EU AI Act establishes obligations for providers, deployers, importers, distributors, and product manufacturers of AI systems with a link to the EU market.
How does the AI Act apply to Instabase?
Instabase provides an application platform that can be used to understand unstructured data and automate procedural tasks. Instabase has not built its own LLM; rather, we rely on third-party LLM providers such as OpenAI. Instabase has used the official EU AI Act Compliance Checker interactive tool to determine whether our AI systems are subject to the Act. As a result of this evaluation, we have determined that Instabase provides General Purpose AI systems that pose no systemic risks.
Additionally, according to Recital 53 of the AI Act –
“It is also important to clarify that there may be specific cases in which AI systems referred to in pre-defined areas specified in this Regulation do not lead to a significant risk of harm to the legal interests protected under those areas because they do not materially influence the decision-making or do not harm those interests substantially. For the purposes of this Regulation, an AI system that does not materially influence the outcome of decision-making should be understood to be an AI system that does not have an impact on the substance, and thereby the outcome, of decision-making, whether human or automated.
An AI system that does not materially influence the outcome of decision-making could include situations in which one or more of the following conditions are fulfilled. The first such condition should be that the AI system is intended to perform a narrow procedural task, such as an AI system that transforms unstructured data into structured data, an AI system that classifies incoming documents into categories or an AI system that is used to detect duplicates among a large number of applications. Those tasks are of such narrow and limited nature that they pose only limited risks which are not increased through the use of an AI system in a context that is listed as a high-risk use in an annex to this Regulation.”
and from Article 3 (Definitions) –
“‘systemic risk’ means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain”
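To make Recital 53’s “narrow procedural task” category concrete, the following is a minimal sketch of such a task: transforming unstructured document text into structured fields by calling a third-party LLM provider. It is illustrative only; the model name, prompt, and field names are assumptions rather than Instabase’s actual implementation.

```python
import json

from openai import OpenAI  # third-party LLM provider SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_invoice_fields(document_text: str) -> dict:
    """Narrow procedural task: transform unstructured text into structured data."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not Instabase's
        temperature=0,  # favor deterministic output
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": "Extract vendor_name, invoice_date, and total_amount "
                           "from the document. Respond with a JSON object only.",
            },
            {"role": "user", "content": document_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(extract_invoice_fields(
    "Invoice #1042 from Acme Corp, dated 2024-03-01, total due $1,250.00"
))
```

A task like this produces structured fields for a downstream workflow; it does not itself decide the outcome, which is the distinction Recital 53 draws.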
What are the relevant requirements for providers of General Purpose AI Systems?
All General Purpose AI providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training. Additionally, some General Purpose AI providers must ensure that AI systems intended to interact directly with humans are designed and developed in such a way that the humans concerned are informed that they are interacting with an AI system.
According to Article 50 of the AI Act –
“Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.”
Instabase’s product is branded “AI Hub”, and the clients of our customers have no interaction with the product. The users of AI Hub are employees of our customers who are reasonably well-informed, observant, and circumspect, and who are already aware that they are interacting with an AI system; a notification within AI Hub is therefore not required.
What steps has Instabase taken to ensure EU AI Act compliance?
Our long-standing commitment to transparency, in combination with our existing processes, has prepared us for the EU AI Act. These steps include:
- Customers can access technical documentation and instructions for use at any time, both on our website and within the application. We share the system architecture, model architecture, and infrastructure requirements with our customers, and we communicate material model changes through release notes and documentation.
- Instabase complies with all relevant copyright laws in jurisdictions where it operates.
- In order to manage risks associated with AI and improve Instabase’s ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems, Instabase has incorporated the NIST AI Risk Management Framework (AI RMF) into its overall Security program.
- Instabase has published a whitepaper overview of its AI Model Risk Management Framework, available at https://trust.Instabase.com (the “Trust Center”).
- Instabase has developed an internal Artificial Intelligence policy to guide personnel in the secure use of AI.
- A multidisciplinary AI risk management team ensures that AI initiatives are developed and deployed responsibly, in compliance with relevant laws and regulations, and with ethical considerations in mind.
- Instabase has included AI controls in our Common Controls Compliance Framework.
- Instabase has a standard process to ensure model quality. This process includes documenting model design and architecture, benchmarking model performance for each platform release, and providing a graphical interface that lets customers easily examine model output applied to their data.
- Instabase supports a standard fairness-testing framework that customers can use to probe a model and check whether it produces the desired output (see the sketch after this list).
- Instabase supports proactive and continuous monitoring of AI models.
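As a concrete illustration of the fairness-testing and monitoring items above, below is a sketch of a simple counterfactual probe: the same document is submitted twice, differing only in a demographic term, and the structured output is expected to be identical. This is an assumed example, not Instabase’s actual framework, and it reuses the hypothetical extract_invoice_fields call from the earlier sketch.

```python
from typing import Callable

# Demographic terms to swap between runs; purely illustrative.
COUNTERFACTUAL_PAIRS = [("John", "Jane"), ("Mr.", "Ms.")]

def counterfactual_probe(extract: Callable[[str], dict], template: str) -> bool:
    """Return True if the model's structured output is invariant to the swaps."""
    for original, swapped in COUNTERFACTUAL_PAIRS:
        baseline = extract(template.format(term=original))
        variant = extract(template.format(term=swapped))
        if baseline != variant:
            print(f"Output changed when '{original}' was replaced by '{swapped}'")
            return False
    return True

# Usage with the extraction function sketched earlier on this page; a real
# framework would also control for sampling variance across repeated calls.
template = "Invoice #1042 from {term} Smith of Acme Corp, total due $1,250.00"
print(counterfactual_probe(extract_invoice_fields, template))
```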