With its increasingly pervasive use, from smartphone apps to self-driving cars, artificial intelligence (AI) touches every aspect of our lives. Recommendations made by AI already directly affect people's daily lives, as well as how corporations and governments make decisions. However, if not designed and used properly, AI can present many risks. For example, AI used in social media can influence how people think and potentially sway political outcomes, and AI is now even used in warfare to spread disinformation.

In recent years, we have seen many incidents where faulty data sets or improperly designed AI algorithms produced biases that adversely affected certain portions of the population: AI recruitment systems that consistently scored female applicants lower, and AI facial recognition systems that performed poorly for minorities. An AI system used in US state courts was shown to be racially biased, falsely flagging black defendants as likely to reoffend at nearly twice the rate of white defendants. An AI health care risk-prediction algorithm used on more than 200 million people in the United States systematically favoured white patients over equally sick black patients when referring them for extra medical care. Facebook's AI advertising algorithm was found to be biased according to gender, race and religion.

Europe wants to change all that and make itself a hub for trustworthy AI. European spending on AI will reach US$22 billion in 2022, according to research firm IDC. Especially after Covid-19, European companies recognise the need for AI automation to improve business efficiency and digital resilience. The European Union's General Data Protection Regulation set a high bar on privacy and is recognised as the global gold standard. The EU now has a similar opportunity to set the standard for trustworthy AI through its new Artificial Intelligence Act.

A key challenge for the act is to balance the need for safety and respect for fundamental rights without stifling AI innovation and growth. It does this through a "risk-based approach", in which AI systems are classified into risk categories depending on usage, with commensurate levels of responsibilities and obligations. For example, AI systems that violate fundamental human rights or exploit vulnerable portions of the population are classified as an "unacceptable risk" and will be prohibited from use.

Many of the AI systems that create the most value will probably fall into the "high-risk" category. These are systems whose decisions could have a big impact on human lives, such as those used in health care, law enforcement, education, recruitment, justice and credit scoring. They will need to comply with a range of requirements and obligations before use, such as ensuring adequate risk management, using quality data sets and avoiding AI bias, and they must be continuously monitored, with an audit trail for compliance assessment. Other AI systems, such as chatbots, are classified as "limited risk" and only need to be transparent about their use of AI, while systems that present "low or minimal risk" can be used without any additional legal obligations.

Fines for non-compliance with the act can be high. The maximum penalty could be €30 million (US$32.9 million) or 6 per cent of the previous year's turnover, whichever is larger.
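To make the tiered structure and the penalty rule concrete, here is a minimal, hypothetical Python sketch. The tier descriptions paraphrase this article, the helper simply encodes the "€30 million or 6 per cent of turnover, whichever is larger" rule, and names such as max_penalty are illustrative assumptions rather than anything defined in the act itself.

```python
# Illustrative sketch of the AI Act's risk-based approach as described
# above. A hypothetical simplification for exposition, not legal advice.

RISK_TIERS = {
    "unacceptable": "prohibited (e.g. systems exploiting vulnerable groups)",
    "high": "allowed with obligations: risk management, quality data sets, audit trail",
    "limited": "allowed with transparency duties (e.g. chatbots must disclose AI use)",
    "minimal": "allowed with no additional legal obligations",
}

def max_penalty(prior_year_turnover_eur: float) -> float:
    """Upper bound on a fine: EUR 30 million or 6 per cent of the
    previous year's turnover, whichever is larger."""
    return max(30_000_000.0, 0.06 * prior_year_turnover_eur)

# A provider with EUR 1 billion in annual turnover faces up to EUR 60
# million, since 6 per cent of turnover exceeds the EUR 30 million floor.
print(f"{max_penalty(1_000_000_000):,.0f}")  # 60,000,000
```

The point of the sketch is simply that obligations scale with the risk a system poses, while fines scale with the size of the company.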
The Artificial Intelligence Act applies to all AI systems placed on the EU market, regardless of whether the providers are established within the EU or elsewhere, and it also applies to providers located outside the EU if the output produced by their AI systems is used within the EU.

The formalisation of such legislation is a great step towards building much-needed trust and confidence among users. But companies in Hong Kong need to move beyond high-level AI ethics and governance principles and start to back them up with concrete data, analytics and evidence to prove their compliance. In the coming year, I expect increased use of tools that analyse AI models and identify potential AI biases, as well as of AI auditing services. More companies will adopt machine learning operations to provide continuous insight and real-time monitoring that reduce risks and ensure ethical practices.

Currently, Hong Kong has no AI-specific laws or regulations. However, last August the Office of the Privacy Commissioner for Personal Data released the "Guidance on the Ethical Development and Use of Artificial Intelligence" to help organisations understand and comply with the relevant requirements of the Personal Data (Privacy) Ordinance. The Hong Kong Monetary Authority's "High-level Principles on Artificial Intelligence", released in September 2019, provided guidance on AI governance and the need for ongoing monitoring and maintenance. The Hong Kong Institute for Monetary and Financial Research released an "Artificial Intelligence in Banking" report in August 2020, sharing guidance and best practices for AI in banking, followed in October 2021 by "Artificial Intelligence and Big Data in the Financial Services Industry", which extended that guidance to the wider financial services sector.

In September 2021, an AI governance expert committee in China published the "Ethical Norms for the New Generation Artificial Intelligence", China's first set of guidelines on AI ethics and part of its goal to become the global AI leader by 2030. The norms emphasise user rights and data control: people should have full decision-making power, with the right to choose whether to accept AI services and the ability to suspend AI system operations at any time, while accountability is also strengthened.

Andy Chun is a vice-president and convenor of the AI Specialist Group at the Hong Kong Computer Society and regional director, technology innovation, at Prudential