The rapid growth of AI in the iGaming industry calls for advanced security tools to protect platforms

iGaming developers need to test their software rapidly and thoroughly in order to bring it quickly to a highly competitive market. With artificial intelligence (AI) now a core component of iGaming platforms, developers also need better protection for those systems, so that the applications they roll out are safe and secure. Microsoft is helping on that front with Counterfit, an open-source tool for testing the security of AI systems.

The Counterfit project, published on GitHub, notes that a previous Microsoft study found most organizations lack the tools to address adversarial machine learning (ML). “This tool was born out of our own need to assess Microsoft’s AI systems for vulnerabilities with the goal of proactively securing AI services, in accordance with Microsoft’s responsible AI principles and Responsible AI Strategy in Engineering (RAISE) initiative,” Microsoft says in a blog post.

The command-line tool is a “generic automation tool to attack multiple AI systems at scale” that the company uses to test its own AI models, and Microsoft is also exploring its use during the AI development phase. Counterfit can be deployed via Azure Shell in a browser or installed locally in an Anaconda Python environment, and Microsoft says it can assess models hosted in any cloud environment, on-premises, or on edge networks.
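That hosting flexibility follows from how such tools interact with a model: from the tester's side, a deployed model reduces to a query interface. The sketch below illustrates the idea with a simple HTTP wrapper; it is not Counterfit's actual API, and the endpoint URL and JSON schema are hypothetical.

```python
import requests

# Hypothetical scoring endpoint. A security tester only needs query
# access like this, which is why the hosting location (cloud,
# on-premises, or edge) makes no difference to the assessment.
ENDPOINT = "https://example-igaming-platform.com/api/model/predict"

def predict(features: list[float]) -> list[float]:
    """Send one input to the deployed model and return its class scores."""
    response = requests.post(ENDPOINT, json={"input": features}, timeout=10)
    response.raise_for_status()
    return response.json()["scores"]  # assumed response schema
```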

Counterfit is also model-agnostic and data-agnostic, applicable to models that use text, images, or generic input. “Our tool makes published attack algorithms accessible to the security community and helps to provide an extensible interface from which to build, manage, and launch attacks on AI models,” Microsoft adds.
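To give a rough sense of what a model- and data-agnostic attack looks like, the sketch below implements a deliberately naive black-box evasion attack: it queries a model only through a generic predict function and randomly perturbs an input until the predicted class flips. The published algorithms Counterfit exposes are far stronger, but they share this query-only structure; every name in the sketch is illustrative, not part of Counterfit.

```python
import numpy as np

def random_evasion_attack(predict, x, steps=500, eps=0.05, seed=0):
    """Naive black-box evasion: perturb x within an eps-ball until the
    model's top class changes. `predict` maps an input vector to class
    scores, so the attack never touches model internals -- which is
    what makes the approach model- and data-agnostic."""
    rng = np.random.default_rng(seed)
    original_label = int(np.argmax(predict(x)))
    for _ in range(steps):
        candidate = x + rng.uniform(-eps, eps, size=x.shape)
        if int(np.argmax(predict(candidate))) != original_label:
            return candidate  # adversarial example found
    return None  # no flip found within the query budget

# Toy usage: a linear "model" standing in for any deployed system.
if __name__ == "__main__":
    weights = np.array([[1.0, -1.0], [-1.0, 1.0]])
    predict = lambda x: weights @ x
    adversarial = random_evasion_attack(predict, np.array([0.51, 0.49]))
    print("found adversarial input:", adversarial)
```

An input near a decision boundary, as in the toy example, flips after only a handful of queries; hardening a model means making such flips expensive or impossible, which is exactly what tools like Counterfit help teams measure.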