Artificial intelligence (AI) continues to be a huge growth driver for businesses of all sizes undergoing digital transformation, with IDC predicting that AI spending will reach $500 billion by 2023. AI helps organizations identify new business models, increase revenue and gain competitive advantage.
But along with these opportunities, AI also brings immense responsibilities.
AI systems and pipelines are complex. They present many potential points of failure, ranging from problems with the underlying data, to errors in the tools themselves, to results that negatively affect people and processes. As AI becomes ubiquitous, prioritizing its responsible use is what allows organizations to grow revenue streams while keeping individuals and communities safe from risk and harm.
In response to these heightened risks, legislators around the world are intervening with guidelines on AI systems and their appropriate use. The latest comes from New York, where employers using AI hiring tools will face penalties if their systems produce biased decisions. As these rules multiply, technology builders and governments must strike a delicate balance.
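To make "biased decisions" concrete: a common way to screen a hiring model for disparate impact is to compare selection rates across groups. The following is a minimal, generic sketch with made-up data, not a reproduction of any specific regulation's audit methodology:

```python
# Generic disparate-impact screen for a hiring model's decisions.
# The group labels and decisions below are synthetic, purely for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group, and each group's rate relative to the highest rate.
rates = decisions.groupby("group")["selected"].mean()
impact_ratios = rates / rates.max()
print(impact_ratios)

# A widely used rule of thumb flags ratios below 0.8 (the "four-fifths rule").
flagged = impact_ratios[impact_ratios < 0.8]
if not flagged.empty:
    print(f"Potential disparate impact for group(s): {list(flagged.index)}")
```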
[ Also read AI ethics: 5 key pillars. ]
By following this simple three-step process, organizations can better understand responsible AI and how to prioritize it as their AI investments evolve:
Step 1: Identify common points of failure; think holistically
The first step is to realize that most AI challenges start with data. We can blame an AI model for incorrect, inconsistent, or biased decisions, but ultimately it’s data pipelines and machine learning pipelines that guide and transform data into something actionable.
Thinking more broadly, stress testing can target multiple points in the pipeline: the raw data, the AI models, and the predictions they produce. To better evaluate an AI model, ask yourself these key questions (a minimal sketch of a couple of these checks follows the list):
- Was an undersampled dataset used?
- Was it representative of the classes the model was trained on?
- Was a validation dataset used?
- Was the data labeled accurately?
- When the model encounters new data at runtime, does it still make the right decisions, even though that data differs slightly from what it was trained on?
- Were data scientists given proper guidelines to ensure they select the right features when training and retraining the model?
- Does the business rule that consumes the prediction take the appropriate next step, and is there a feedback loop?
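For teams that want to make a couple of these questions concrete, here is a minimal sketch that checks class balance and carves out a stratified validation set. It assumes a pandas DataFrame loaded from a hypothetical file with a hypothetical "label" column:

```python
# Two of the checks above: class balance and a held-out validation split.
# The dataset path and the "label" column name are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("training_data.csv")  # hypothetical dataset

# 1. Was an undersampled or unrepresentative dataset used?
#    Inspect per-class proportions; a severely skewed distribution is a red flag.
class_counts = df["label"].value_counts(normalize=True)
print(class_counts)
if class_counts.min() < 0.05:
    print("Warning: at least one class is under 5% of the data -- consider resampling.")

# 2. Was a validation dataset used?
#    Hold out a stratified split so every class appears in both sets.
train_df, val_df = train_test_split(
    df, test_size=0.2, stratify=df["label"], random_state=42
)
print(f"train: {len(train_df)} rows, validation: {len(val_df)} rows")
```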
By taking this broader perspective, organizations can think about safeguards around people, processes, and tools. Only then can they build a robust and adaptable framework that addresses the points of failure in these systems.
Step 2: Build a multidisciplinary framework
When it comes to creating a framework based on people, processes, and tools, how do we apply it to complex data and AI lifecycles?
Data is everywhere – on premises, in the cloud, at the edge. It is used in batches and in real time in hybrid infrastructures to enable organizations to deliver insights closer to where that data resides. With low-code and automated tools, machine learning algorithms are sophisticated enough to allow users of different skill levels to build AI models.
A strong technology foundation is essential to meet the changing needs of AI-enabled workloads, as well as of the AI technologies themselves – libraries, open source tools, and platforms. Common capabilities found in this foundation include, but are not limited to:
- Bias detection in AI model training and deployment
- Drift detection in model deployment
- Explainability for both model training and deployment
- Anomaly and intrusion detection
- Governance, from data to policy management
More advanced platforms offer remediation paths for most of the capabilities above, so teams can go beyond detection to actually process and fix the issues they find.
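To give one of these capabilities some shape, here is a simplified sketch of drift detection. It assumes numeric features and synthetic data, and uses a two-sample Kolmogorov-Smirnov test from SciPy to show the idea; production platforms typically use more robust methods:

```python
# Simplified drift check: compare a feature's distribution at training time
# against the distribution seen in production. The data here is synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # distribution at training time
production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # the live data has shifted

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={statistic:.3f}); consider investigating or retraining.")
else:
    print("No significant drift detected.")
```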
However, a technology-based approach is not enough. Responsible AI requires a culture that embraces it. That starts with leadership teams setting company-wide goals to raise awareness and establish priorities. Steering committees and officers dedicated to the responsible AI mission are essential to its success. It is increasingly common for organizations to hire chief AI ethicists to orchestrate internal business operations alongside external social mandates, often acting as intermediaries who help manage corporate social responsibilities.
Step 3: Invest, develop skills and educate
In the last step, we must have a bias (no pun intended) towards action.
Meeting the technological challenges associated with AI requires investing in a diverse skill set, which is not an easy task. Even so, training internal teams on content developed by big tech companies such as Google, IBM, and Microsoft can help build internal skills.
It is also important to educate legislators and government officials. By regularly informing and coordinating with local and federal agencies, organizations can help policy keep pace with technical innovation.
With AI becoming more ubiquitous, it is more important than ever to establish safeguards, both for technology and for society. Organizations looking to understand, build, or use AI must prioritize responsible AI. These three steps will help leaders lay the foundation for responsible AI within their organizations, promote it, and sustain it.
[ Want best practices for AI workloads? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]