How to put AI ethics into practice in your company

In recent years, there has been a shift in how companies think about their technology: from focusing only on its technical effects to recognizing the need for, and working to develop, solutions that ensure their technology, including artificial intelligence (AI), acts responsibly. According to an AI ethics survey by the IBM Institute for Business Value (IBV), which included 1,200 executives across 22 countries and 22 industries, nearly 80 percent of CEOs are prepared to take action to increase AI accountability. That's up from just 20 percent in 2018. Awareness of the importance of AI ethics also extends remarkably across organizations: 80 percent of respondents in this year's survey cited a non-technical executive as the primary "champion" of AI ethics, compared with 15 percent in 2018.

This is encouraging progress. However, much work remains to ensure that the benefits of AI reach everyone equally. Today, nearly 85 percent of companies believe it is important to address the ethics of artificial intelligence, according to the IBV study data. Yet only 40 percent of consumers said they trust companies to be responsible in developing new AI applications, the same percentage that said so in 2018, nearly four years ago.

The benefits of artificial intelligence continue to grow.

Artificial intelligence has transformative potential. In 2021, "AI augmentation," defined as "a human-centred partnership model of people and artificial intelligence working together," was projected to create an estimated $2.9 trillion in business value and save an estimated 6.2 billion hours of worker productivity, according to Gartner. As investment and adoption continue to grow exponentially, along with the development of no-code and low-code solutions that let people customize AI without extensive technical knowledge, AI will keep becoming more accessible to the masses. AI can augment human capabilities in many areas, from research and analysis to basic daily tasks such as managing calendars and finances.

AI also allows us to think more expansively about what is possible. It took scientists more than 30 years to manually map the 3.1 billion base pairs of the human genome, a critical project essential to understanding how to treat complex conditions and sustain human life. Now, by combining artificial intelligence with human intelligence, we can simplify and speed up similar processes and successfully tackle today's most pressing challenges.

And we have already seen AI deliver major breakthroughs: in the past year, researchers at IBM, Oxford, Cambridge, and the National Physical Laboratory showed how AI-designed antimicrobial peptides interact with computational models of the cell membrane, a development that could have broad implications for drug discovery.

Making sure AI is trustworthy is a balancing act — but it’s worth it.

While the promise of AI is great, there are also pitfalls if we do not ensure that it is trustworthy: that it is fair, explainable, transparent, robust, and respectful of our data and insights. The definition of "untrustworthy AI" may be obvious to most people: discriminatory, opaque, abused, and failing to meet public expectations of trust. However, developing trustworthy AI can still be challenging given the practical balancing act it sometimes requires: for example, between "interpretability" (the ability to understand the rationale behind an AI algorithm's results) and "robustness" (the accuracy with which the algorithm arrives at those results).
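To make that trade-off tangible, here is a minimal, illustrative sketch (assuming Python with scikit-learn and a standard benchmark dataset, neither of which the article specifies) that compares a shallow, auditable decision tree with a more accurate but harder-to-explain boosted ensemble:

```python
# Minimal sketch of the interpretability vs. accuracy trade-off.
# Assumes scikit-learn is installed; the dataset and models are for
# illustration only, not any particular vendor's tooling.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow decision tree: its full decision logic can be printed and audited.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# A boosted ensemble: typically more accurate, but far harder to explain.
boost = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("Interpretable tree accuracy:", accuracy_score(y_test, tree.predict(X_test)))
print("Boosted ensemble accuracy: ", accuracy_score(y_test, boost.predict(X_test)))

# The tree's rules are human-readable end to end; the ensemble's are not.
print(export_text(tree, feature_names=list(load_breast_cancer().feature_names)))
```

Printing the tree's rules makes its reasoning auditable from input to output, while the ensemble's typically higher accuracy comes at the cost of that transparency; which side of the trade-off to favor depends on the use case and its risk profile.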

Organizations can no longer embrace AI without addressing these trade-offs and other ethical issues. The question is whether they confront them strategically, purposefully, and thoughtfully. It certainly won't be easy. But there are concrete steps companies and organizations can start taking now to move in the right direction.

Place AI ethics practices in the appropriate strategic context.

As with any large-scale initiative, implementing AI ethics begins with defining the right strategy for success. Consider how building trustworthy AI fits into the business strategy and goals: What are the key value drivers that AI can accelerate? How will success be measured?

It is also important to consider the role of AI innovation in the organization's growth strategy and approach: is the organization a "leader" that constantly pushes the boundaries of putting new technology into practice, or a "fast follower" that favors more tested approaches? The answers to these questions will help define and codify key AI ethics principles and determine the human-machine balance within the organization.

Develop a governance approach to implementing AI ethics.

The next step is for a company to create its own AI ethics governance framework. This begins with integrating a full range of perspectives (for example, business leaders, customers, government officials, and society at large) on topics such as privacy, robustness, fairness, explainability, and transparency. It also means ensuring diversity of identity and perspective: new IBM research shows that AI teams include 5.5 times fewer women than the organization as a whole, 4 times fewer LGBT+ individuals, and 1.7 times fewer Black, Indigenous, and people of color (BIPOC).

Establishing the right governance framework also requires companies to consider defining their own AI data risk profile, as well as an internal structure, policies, processes, and system for monitoring AI ethics both internally and externally.
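As one concrete illustration of what a "system for monitoring AI ethics" might include, the sketch below computes a disparate impact ratio (the "four-fifths rule") over a batch of logged model decisions. The data, column names, and 0.8 threshold are assumptions for the example, not part of any specific framework; Python and pandas are assumed.

```python
# Illustrative internal monitoring check: disparate impact ratio for a
# binary classifier's favorable outcomes across two groups.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     privileged: str, unprivileged: str) -> float:
    """Ratio of favorable-outcome rates: unprivileged group / privileged group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[unprivileged] / rates[privileged]

# Hypothetical batch of model decisions logged by a governance pipeline.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   1],
})

ratio = disparate_impact(decisions, "group", "approved",
                         privileged="A", unprivileged="B")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule of thumb; set the threshold per your risk profile
    print("Flag for review: outcome rates differ substantially across groups.")
```

A check like this is only one input to governance; thresholds, protected attributes, and escalation paths should come from the organization's own risk profile and policies.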

Integrate ethics into the artificial intelligence life cycle.

Finally, AI ethics is not a "set and forget" process. There are a number of additional steps an organization needs to take once it has established its governance and monitoring system. First, it must continue to communicate with internal and external stakeholders on the topic, as well as capture, report, and review compliance data. It must also lead and support the education and diversity efforts of internal teams, and identify integrated methodologies and toolkits that put AI ethics principles into practice.
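What "capturing compliance data" can look like in practice is simply recording an auditable entry for every model release. The sketch below is a hedged, minimal example; the field names, file path, and metric values are hypothetical, and real programs would use a proper factsheet or model-card system rather than a flat log.

```python
# Minimal sketch of capturing compliance data for later reporting and review.
import json
from datetime import datetime, timezone

def record_model_factsheet(model_name: str, version: str, metrics: dict,
                           reviewer: str, path: str = "ai_ethics_log.jsonl") -> None:
    """Append one audit record per model release to a JSON-lines log."""
    entry = {
        "model": model_name,
        "version": version,
        "metrics": metrics,            # e.g. accuracy, fairness, drift checks
        "reviewer": reviewer,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage for a model release.
record_model_factsheet(
    "loan_approval", "1.4.2",
    {"accuracy": 0.91, "disparate_impact": 0.83},
    reviewer="ethics-board@example.com",
)
```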

Artificial intelligence will only become more embedded in our daily lives, and it must be developed responsibly, in a way that keeps ethical principles at the heart of the technology. Fortunately, the guidance on AI ethics is becoming clearer, more practical, and more realistic. But it is up to all of us, across industry, government, research, academia, and society at large, to stand up for it.

The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.