How should AI be regulated? Does a dedicated AI authority make sense?

The rapid rise of AI applications such as ChatGPT has raised both hopes and fears, and calls for regulation to limit adverse effects are growing louder. In this context, demands for a dedicated AI authority are increasing. This reflexive call may stem from fears of a loss of control, but it misses the point, not least because AI is a cross-sectoral issue. Setting up such an authority would take far too long, come too late, be inefficient, and still create unnecessary redundancies with existing institutions.

For a cross-sectoral issue, a cooperation model between established institutions (competition authority, sectoral regulators, data protection authorities, etc.) is the most appropriate approach. However, this addresses only risk mitigation. To promote innovation and research in this field, a holistic approach is needed that combines innovation policy and industrial policy with sufficient budgets. Only then is Europe likely to be able to compete internationally in AI.

AI regulation should be flexible and principle-based, especially for general-purpose AI. With scaremongering, bans, fear, and the creation of unnecessary new authorities, we only accelerate Europe's path toward becoming an industrial and scientific museum.