Today at the United Nations Internet Governance Forum in Kyoto, Japan, I made a clear statement on the future of global AI regulation with respect to cybersecurity: we do not need AI-specific cybersecurity legislation. Instead, we need to distinguish three key use cases to better structure and understand the current global policy debate:

1. AI is used to improve cybersecurity;
2. AI is used to compromise cybersecurity;
3. AI is developed to improve cybersecurity.

When AI is used to improve cybersecurity, it is, technically speaking, one of several possible measures that can strengthen cyber resilience. European lawmakers, who currently lead the world in cybersecurity legislation, have so far avoided mandating specific technologies to achieve an appropriate level of cybersecurity and have instead used the "state of the art" as a guideline. This is a reasonable approach: given rapid technical development cycles, no law will ever be able to conclusively enumerate, case by case, the technologies needed to improve cybersecurity.

If cyber attackers use AI to compromise IT systems, this is likewise not a specifically AI scenario, because attackers, just like defenders, may use a range of different technologies. Cyber criminal law in various countries around the world already provides criminal offenses for such conduct.

Developing AI to improve cybersecurity, on the other hand, is more problematic, because it must be ensured that AI systems are not compromised even at this stage. This is also what the European AI Act, for example, seeks to achieve when it stipulates that AI itself must be cybersecure. Developers of AI must therefore provide safeguards, for instance to prevent the manipulation of training data sets or to block hostile inputs crafted to manipulate an AI's responses.
#unitednations #internetgovernanceforum #igf #cybersecurity #ai #kyoto #japan #denniskenjikipker