Data governance for AI readiness in the public sector is essential

This blog makes the case for modernizing the public sector faster to take advantage of AI, and argues that the right fundamentals must be anchored in robust data governance.

Introduction

It is undeniable that AI is rapidly integrating into business processes at work and at play, and over time the intelligence gathered from this growing data acquisition will consolidate into deeper insights used for the well-being of citizens and employees.

However, our public sector institutions are failing us in the robustness of their data governance and, in particular, their readiness for AI. If their modernization does not accelerate, our national AI security will continue to suffer.

What is Data Governance?

In the most basic terms, data governance is the process of managing the availability, usability, integrity, and security of data in enterprise systems, based on internal data standards and policies that also control data usage. Effective data governance ensures that data is consistent and reliable and is not misused. These practices are essential because they provide the foundation for managing AI practices.
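To make that definition concrete, here is a minimal sketch of what a governed dataset record and a policy check might look like. The field names, risk classes, and the training-use rule are all illustrative assumptions, not a standard; real governance platforms define their own schemas.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative risk classes; real organizations define their own taxonomy.
class Classification(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    RESTRICTED = "restricted"

@dataclass
class GovernedDataset:
    """A hypothetical catalog record tying a dataset to its policies."""
    name: str
    owner: str                      # accountable data steward
    classification: Classification  # drives access and handling rules
    retention_days: int             # how long the data may be kept
    pii: bool                       # contains personally identifiable info

def may_use_for_ai_training(ds: GovernedDataset) -> bool:
    """Example policy check: only non-restricted, non-PII data
    may feed model training without further review."""
    return ds.classification != Classification.RESTRICTED and not ds.pii

payroll = GovernedDataset("payroll_2023", "hr-data-steward",
                          Classification.CONFIDENTIAL, 2555, pii=True)
print(may_use_for_ai_training(payroll))  # False: PII requires review first
```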

A World Economic Forum (WEF) report highlighted five key barriers affecting the public sector in advancing its data practices for AI:

1. Many public sector and government organizations have only a basic understanding of their data.

2. Employees often lack the necessary AI and data management skills.

3. The AI landscape is becoming increasingly complex and competitive.

4. Public sector employees are less encouraged to innovate and take risks.

5. AI algorithms require maintenance from specific vendors, which is an additional cost for public sector organizations.

The remainder of this blog addresses each of the five points raised in the WEF report, adding my own insights and experience from solving complex data governance issues in large organizations, with a focus on AI readiness.

Effective use of data

The volume of data is spiraling out of control: many organizations, especially in the public sector, were never designed to handle the sheer scale of data that hits them like constant tsunami waves. With so much rich data locked away in unstructured documents, inefficient search infrastructure, and the absence of centralized knowledge hubs, public sector organizations often have only the building blocks to work with.

As the WEF report so aptly indicates, public organisations often cannot answer even the simplest questions about their data.

The questions I like to ask our clients upfront when it comes to AI readiness are:

1.) Do you have a data steward who governs all of your policies, practices and infrastructure?

2.) Do you have a data governance operating model with clear management and operational metrics?

3.) Do you have a data journey improvement roadmap in place?

4.) Do you run any AI applications and how many datasets do they leverage?

5.) Do you know how many AI algorithms you have, and when they were last reviewed or externally audited? (This question usually creates quite a reaction; I have yet to find a C-level executive who can report back to me in less than 24 hours. This question alone can lead to months of work, and in some large organizations it can simply be impossible.)

6.) Do you have a data risk management process to manage your data assets?

7.) How many databases do you have? Do they serve primary or secondary processes?

8.) What types of data are stored in these data repositories?

9.) Do you have a centralized data catalog that categorizes all databases, so that all fields and entity relationships are well defined with clear domain owners? (See the sketch below.)

These questions alone open rich conversations about the organization’s maturity in data governance and also create a clearer context about the challenges that lie ahead in applying AI methods.
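To make questions 4, 5, and 9 concrete, here is a minimal sketch of what a combined dataset-and-model inventory could capture. All names, fields, and the one-year review cadence are hypothetical illustrations, not a reference implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    """Hypothetical inventory entry for one AI model."""
    name: str
    domain_owner: str
    datasets: list[str]         # catalog names of the datasets it consumes
    last_reviewed: date | None  # None means never reviewed or audited

# A toy inventory; in practice this would live in a governed data catalog.
inventory = [
    ModelRecord("benefit_triage", "social-services",
                ["claims_2022"], date(2023, 1, 10)),
    ModelRecord("fraud_scoring", "finance",
                ["payments", "claims_2022"], None),
]

# Question 5: which models were never reviewed, or not reviewed recently?
STALE_AFTER_DAYS = 365  # assumed review cadence, adjust per policy
for m in inventory:
    age = (date.today() - m.last_reviewed).days if m.last_reviewed else None
    if age is None or age > STALE_AFTER_DAYS:
        print(f"{m.name} (owner: {m.domain_owner}) needs review")

# Question 4: how many distinct datasets do these models leverage?
print(len({ds for m in inventory for ds in m.datasets}))  # 2
```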

Ultimately, data is what powers AI.

So if a public sector organization cannot clearly articulate how all of its data is collected, stored, and defined with risk classes under strong data governance, tackling complex AI programs will be a challenge; it is also likely that any investment in AI will simply not be sustainable to operate.

Employees often don’t have the right skills for AI and data management

Many public sector organizations lack qualified, skilled, and trained talent. Few have a data manager in place, or other qualified data process owners and business process champions, because it takes a whole culture that cares about data as a strategic asset to steer data governance in the right direction.

Additionally, the data and AI skills required are in high demand, and the operating costs of sourcing them make this a major challenge for public organizations, which may be best served by outsourcing to qualified, AI-savvy consulting firms like CGI, Deloitte, or my alma mater, Accenture, to name a few. But let's not forget that many smaller, more nimble companies often have deeper insights, because they are the ones constantly experimenting and innovating. A mix of talent is therefore often optimal, and the costs will be significantly more attractive. Balancing knowledge from various sources in the field of AI is always the wiser path.

Adding to these realities, government employees in non-technical roles, such as policymakers, department managers, and procurement managers, are typically not trained in the language of data and AI. There are also a huge number of legal and ethical considerations impacting privacy and security when using AI solutions. While strong ethical AI frameworks are in place, the legal frameworks for AI are still emerging, and I think we will start to see more progress in AI legislation in 2023/2024, given the ongoing global efforts and regulators filling in the gaps in the market.

Perhaps the biggest hurdle in public sector organizations is that their functional silos are not streamlined into cross-functional work processes, and incentives are often not aligned to create integrated systems that operate as a unified, highly collaborative, and agile whole. With cost constraints still a reality, smart organizational design is key to advancing the right behaviors: you can train everyone on AI and data, but if people's practices aren't collaborative and effective, new knowledge is rarely retained, let alone maintained.

The AI landscape is becoming increasingly complex and competitive.

The global AI market is expected to grow from USD 387.45 billion in 2022 to USD 1,394.30 billion in 2029, a CAGR of 20.1% over the forecast period. The AI market is dominated by leading companies like Alibaba, Amazon, Facebook, Google, and Microsoft, alongside thousands of small and medium-sized players innovating daily in every industry class. In other words, it is a market segment dominated by heavyweights, and staying current requires dedicated resources responsible for tracking AI technology innovations, communicating regularly with different functions to continuously educate them on AI, and making them aware of its strengths and weaknesses. Most public sector organizations don't have the operating budget for these market sense-makers, which is another reason to partner with companies that have skills in these areas and to rely on their expertise to guide the public sector on how to modernize and be better prepared to take advantage of AI.
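As a quick sanity check on those figures, the implied CAGR can be recomputed directly from the endpoints quoted above; the snippet below is just that arithmetic.

```python
# Verify the implied compound annual growth rate (CAGR) of the quoted
# AI market figures: USD 387.45B (2022) to USD 1,394.30B (2029).
start, end, years = 387.45, 1394.30, 2029 - 2022
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR ~ {cagr:.1%}")  # ~20.1%, matching the cited figure
```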

Public sector employees are less encouraged to innovate and take risks.

The public sector is notorious for not creating strong cultures of innovation because employees are not encouraged to take risks. An article in Apolitical states: “Government risk incentives don’t really exist. If you achieve a major improvement in service delivery, you don’t get a pay increase or get promoted faster.”

It is difficult to make AI a core competency in many public sector organizations because AI is a transformative technology that requires agility and often lots of experimentation and patience to succeed.

AI algorithms also require constant maintenance from specific vendors, which is an additional cost for public sector organizations.

AI models are never finished: they require continuous monitoring to ensure they don't drift and that predictive quality is maintained. There is also the ongoing reality that new data can improve a model, and of course new datasets need to be cleaned to ensure they are free of bias. Public sector organizations considering AI should plan for continuous maintenance lifecycles, because unlike many other software products, AI models must keep evolving over time to stay relevant.
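To give one concrete flavor of what that continuous monitoring can look like, here is a minimal drift-check sketch using a two-sample Kolmogorov-Smirnov test on a single input feature. The data, feature, and alert threshold are all invented for illustration; production monitoring would track many features and quality metrics.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)

# Stand-ins for one numeric feature: its training-time distribution
# versus what the deployed model is seeing in production.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted

# Two-sample KS test: a small p-value means the two samples are
# unlikely to come from the same distribution, i.e. drift.
stat, p_value = ks_2samp(training_feature, production_feature)

ALERT_THRESHOLD = 0.01  # assumed alerting level, tune per use case
if p_value < ALERT_THRESHOLD:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}); review/retrain.")
else:
    print("No significant drift detected.")
```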

Conclusion

In conclusion, public sector organizations can easily experiment with and deploy AI solutions, but to build a strong, robust, AI-proficient organization, data governance best practices must form the foundational platform for modernizing our public sector. This is one of the most important tasks to accomplish, as countries like China are already classified as a security threat to the United States due to the speed of their data collection and classification practices. Canada has yet to make a firm statement on the security threat posed by China or on the risks this creates for the modernization of our public institutions.

We must understand that this is not only a threat to our national security; more importantly, we will not be able to design and develop new product innovations as quickly as other countries if our public institutions are unable to take this leap forward.

Additionally, private sector leaders who effectively and efficiently advance AI have a responsibility to help our public sector institutions evolve, and this will be the subject of my next blog.
