Artificial intelligence ('AI') is no longer just the evil antagonist of our favourite science fiction novels. With the rise of intelligent assistants such as Apple's Siri or Amazon's Alexa, so-called 'weak' AI has already found its way into everyday household devices. Yet the potential impact of 'strong' AI (i.e. a machine with the capacity to understand or learn any intellectual task just as humans can) on our society remains pure speculation.
From a legal perspective, even weak AI raises multiple questions that may require us to rethink fundamental legal concepts such as fault, accountability, product liability, non-discrimination, and data protection, among many others. Despite a lively discussion among legal scholars, however, AI still does not seem to have gained the attention it deserves from politicians and legislators.
Just this week, the German 'KI-Enquete' (an AI-focused committee of the Bundestag; see their website) presented the results of its two years of work. The recommendations are grouped into three topics: AI and the economy, AI and the state, and AI and health. A final report is expected to be published in autumn 2020.
In Austria, the government's approach towards AI was formulated in 2018 in a document entitled 'Artificial Intelligence Mission Austria 2030' (available here). Apart from numerous descriptions of current and potential future applications of AI, however, the document lacks clear guidelines and policies. Since then, no AI-related policies have been communicated to the public.
Technological advancement does not wait for an appropriate legal framework to be developed. Quite the contrary: the law typically lags behind technological breakthroughs. With the rise of AI in particular, this sequence must change, as its impact on the functioning of our society may be profound, affecting us all. Undeniably, AI needs rules – even in a small country like Austria – and sooner rather than later.