In the rapidly evolving technology sector, artificial intelligence (AI) stands at the forefront of innovation. Although technology companies are natural leaders in AI and are adopting it to a greater extent than companies in other industry sectors,[1] they face an increasingly complex international regulatory landscape.
AI use cases in the tech sector – examples from Germany and Australia
Use cases are plentiful and range from customer chatbots to space tech. In hands-on examples, DLA Piper has provided legal advice on a variety of AI development, implementation and operation scenarios in compliance with the applicable legal framework, in particular in the mobility, payment, insurance, automotive, robotics and space tech sectors. As the use cases at hand often trigger not only AI-specific but also sector-specific regulation (e.g. in financial services and insurance), legal advice that truly elevates a client’s business frequently requires cross-practice and cross-border expertise, which DLA Piper is in a position to deliver on a global scale. As a key technology, AI-driven solutions can also trigger specific requirements at the intersection with critical infrastructure regulation, e.g. under EU legislation on network and information security (e.g. NIS, DORA) or national legislation such as Germany’s IT Security Act (“IT-Sicherheitsgesetz”). In addition, general law, for example on software, copyright and personality rights, but also on confidentiality, know-how and business secrets, must be considered.
AI regulation in the EU – the landmark AI Act
The European Union has taken a bold leading role in AI regulation worldwide. EU AI regulation revolves around the so-called AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence), which entered into force on 1 August 2024. The AI Act takes a risk-based approach, assigning AI systems to one of four risk categories (prohibited AI practices, high-risk AI systems, AI systems with limited risk, and AI systems with minimal or no risk), each governed by the rules of its respective category.
One key term is the definition of “AI system”. According to the AI Act, an AI system is “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.[2] This definition aligns with the updated OECD definition of an AI system[3] and emphasizes the system’s ability to generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.
Depending on the specific classification under the AI Act, different requirements and obligations apply. High-risk AI systems are subject to extensive requirements, including on data and data governance, record-keeping and human oversight, while their providers and deployers must fulfil various corresponding obligations depending on their exact role. Limited-risk AI systems may be subject to transparency obligations, while AI systems with minimal or no risk may be free of requirements altogether. AI practices considered to pose unacceptable risk, such as AI systems using purposefully manipulative or deceptive techniques, are prohibited.
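For illustration only, the following is a minimal sketch of how a technology company might record the outcome of such a classification exercise in an internal AI inventory. The class names, roles and example entries are hypothetical assumptions added for illustration and are not taken from the AI Act; the sketch is not legal advice.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four AI Act risk categories, used as labels for an internal inventory."""
    PROHIBITED = "prohibited AI practice"
    HIGH_RISK = "high-risk AI system"
    LIMITED_RISK = "AI system with limited risk"
    MINIMAL_RISK = "AI system with minimal or no risk"


@dataclass
class AISystemRecord:
    name: str          # internal system name (hypothetical)
    role: str          # e.g. "provider" or "deployer"
    tier: RiskTier     # outcome of the legal classification exercise
    notes: str = ""    # obligations identified, follow-up actions, etc.


# Hypothetical example entries for a governance register.
inventory = [
    AISystemRecord("customer-chatbot", "deployer", RiskTier.LIMITED_RISK,
                   "Review transparency obligations."),
    AISystemRecord("credit-scoring-model", "provider", RiskTier.HIGH_RISK,
                   "Data governance, record-keeping and human oversight requirements."),
]

for record in inventory:
    print(f"{record.name}: {record.tier.value} ({record.role}) - {record.notes}")
```

In practice, the legal classification itself requires a case-by-case assessment against the criteria of the AI Act.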
So-called “general purpose AI models” (GPAI models) may attract their own set of obligations, such as specific documentation and information obligations. GPAI models are defined as AI models “that display significant generality and are capable of competently performing a wide range of distinct tasks regardless of the way they are placed on the market and that can be integrated into a variety of downstream systems or applications”.[4] Depending on whether a particular GPAI model qualifies as a GPAI model with or without systemic risk, the obligations to be fulfilled and measures to be taken may differ.[5]
Infringements of the AI Act can result in substantial fines. Non-compliance or insufficient compliance with the prohibition of certain AI practices can attract fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. Infringements such as non-compliance or insufficient compliance with transparency or documentation obligations may incur fines of up to EUR 15 million or 3% of turnover. The supply of incorrect, incomplete or misleading information to national competent authorities can be subject to fines of up to EUR 7.5 million or up to 1% of turnover. EU Member States may implement additional sanctions via national legislation.
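Purely as an arithmetical illustration of the “whichever is higher” cap for prohibited AI practices (hypothetical turnover figure, not legal advice):

```python
def max_fine_prohibited_practices(worldwide_annual_turnover_eur: float) -> float:
    """Upper limit of the fine for prohibited AI practices:
    EUR 35 million or 7% of total worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)


# Hypothetical company with EUR 1 billion worldwide annual turnover:
# 7% of turnover (EUR 70 million) exceeds EUR 35 million, so the cap is EUR 70 million.
print(max_fine_prohibited_practices(1_000_000_000))
```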
AI regulation in Australia – a tentative and phased approach
In contrast to the EU, Australia’s federal government is taking a more deliberate and phased approach to regulating AI. While there are currently no binding AI-specific laws in place, momentum is steadily building toward a future regulatory framework that reflects global trends and acknowledges the cross-border nature of AI technologies. Even in the absence of formal legislation, emerging standards are already shaping organisational behaviour, particularly in the areas of procurement, governance, and risk evaluation. The takeaway is clear: prepare early or risk being left behind in a rapidly evolving regulatory environment.
In September 2024, the Australian government released a proposals paper outlining 10 mandatory guardrails for high-risk AI applications. These guardrails emphasize accountability (guardrail 1), data governance (guardrail 3), testing and monitoring (guardrail 4), human oversight (guardrail 5), and transparency and explainability (guardrail 6) throughout the AI lifecycle. The focus is on ensuring that AI systems do not infringe upon human rights, cause physical or psychological harm, or lead to significant legal or societal impacts. Complementing this, the Voluntary AI Safety Standard was introduced, providing organisations with guidelines to responsibly develop and deploy AI systems, applicable to AI systems of any risk level.
The need for worldwide expertise
While the EU and Australia (and indeed many other jurisdictions) have taken diverging regulatory approaches, many principles are converging and compliance is essential regardless of jurisdiction. In particular, companies active across national borders must not underestimate the compliance challenge posed by AI. All in all, AI governance is becoming a strategic differentiator and success factor, not just a compliance obligation. For tech companies, understanding regional differences and ensuring compliance is essential for successful global operations.
[1] For more insights from the technology sector, see our AI Governance Report here.
[2] Art. 3 No. 1 AI Act.
[3] Cf. Explanatory Memorandum on the Updated OECD Definition of an AI System, p. 6, accessible at https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/03/explanatory-memorandum-on-the-updated-oecd-definition-of-an-ai-system_3c815e51/623da898-en.pdf (last accessed 3 June 2025).
[4] Art. 3 No. 63 AI Act.
[5] Cf. Art. 53, 55 AI Act.