The Office of the Auditor General of Norway has a Chief Data Scientist.
Artificial Intelligence and Machine Learning are being used to improve public services and reduce costs.
This technology also brings new challenges and risks, such as data security, the automation and institutionalization of unfair treatment, and the mass production of incorrect or discriminatory decisions.
Applications based on artificial intelligence and machine learning are usually examined through special performance or compliance audits. Because the models tend to be embedded in broader IT infrastructures, these audits often need to incorporate IT audit elements.
Public auditors have little guidance on how to audit artificial intelligence and machine learning. The Office of the Auditor General of Norway therefore collaborated with data science colleagues from the SAIs of Germany, the Netherlands and the United Kingdom to develop a white paper on auditing machine learning.
The paper is available online (auditingalgorithms). It outlines the risks of using artificial intelligence and machine learning in public services and proposes an audit catalogue that includes methodological approaches for AI-application audits.
Some of the key points are touched on in the article.
Project management and governance of artificial intelligence systems require specialized technical knowledge of the models.
As with any project management audit, the development of an artificial intelligence system has to be audited. If a government agency introduces artificial intelligence in a specific setting, it is reasonable to ask, “Is there a clear goal for what the system should achieve?” and “Is there a sustainable structure to maintain the model once the consultants leave?”
To reduce reliance on specialized external skills, the agency needs adequate documentation of model development and personnel who understand the model.
Data quality is always important, and it is especially critical in modeling: biased data can lead to flawed results.
Performance metrics will most likely be inflated if the same data is used both to build the model and to verify its performance during testing or validation. This overfitting leads to performance loss when the model is applied to new production data.
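The overfitting risk is easy to demonstrate: a model that memorizes its training data scores perfectly on that data while failing on unseen data. A minimal sketch in Python, using toy data and a 1-nearest-neighbour rule as the memorizing model (all names and data are illustrative, not from the white paper):

```python
import random

random.seed(42)

# Toy data: one feature, labels unrelated to it, so nothing
# generalizable can be learned.
def make_data(n):
    return [(random.random(), random.randint(0, 1)) for _ in range(n)]

train, test = make_data(200), make_data(200)

def predict(x, data):
    # 1-nearest-neighbour: return the label of the closest stored point.
    return min(data, key=lambda p: abs(p[0] - x))[1]

def accuracy(samples, data):
    return sum(predict(x, data) == y for x, y in samples) / len(samples)

train_acc = accuracy(train, train)  # evaluated on the data used to "fit"
test_acc = accuracy(test, train)    # evaluated on held-out data

print(f"training accuracy: {train_acc:.2f}")  # perfect by construction
print(f"test accuracy:     {test_acc:.2f}")   # roughly chance level
```

Evaluating only on the training data would report a perfect score; the held-out set reveals the true (near-chance) performance, which is why auditors should check that testing and validation used data separate from training.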
Privacy and the use of personal data are also important data considerations. Data minimization is a central principle of the European Union's General Data Protection Regulation; when training or testing models, this equates to limiting the use of personal information. Even in countries with different regulations, minimizing the use of personal data is a good rule of thumb.
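In practice, data minimization can be as simple as stripping direct identifiers from records before they reach the training pipeline. A minimal sketch, with hypothetical case records and field names invented for illustration:

```python
# Hypothetical case records; all field names and values are illustrative.
records = [
    {"name": "Ola Nordmann", "national_id": "010190-12345",
     "income": 420_000, "region": "Oslo", "eligible": True},
    {"name": "Kari Nordmann", "national_id": "020285-54321",
     "income": 310_000, "region": "Troms", "eligible": False},
]

# Data minimization: keep only the fields the model actually needs,
# dropping direct identifiers before training or testing.
FEATURES = ("income", "region", "eligible")

training_data = [{k: r[k] for k in FEATURES} for r in records]

print(training_data[0])  # identifiers are gone before modeling begins
```

An auditor reviewing a pipeline can look for exactly this kind of step: a documented point where identifying fields are removed or pseudonymized before modeling.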
An auditor with sufficient knowledge of artificial intelligence and machine learning can readily assess model development.
The documentation should include a well-structured and well-commented codebase, thorough records of the hardware and software used, and an explanation of how the model will be maintained once it is in production.
If a hard-to-explain model is used, it is important that the rationale for selecting that artificial intelligence or machine learning approach be well articulated. Auditors can rerun training and testing to verify the model that was chosen.
Fairness and equal treatment are at the forefront of model development.
Data used to build a model may be biased. Group-based fairness requires models to treat different groups the same way, but in practice this can be more complex: group-level demographic disparity in the data used to train an artificial intelligence model can produce misleading predictions.
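One common group-based fairness check is demographic parity: comparing the rate of positive decisions across groups defined by a protected attribute. A minimal sketch, with hypothetical model decisions and group names invented for illustration:

```python
from collections import defaultdict

# Hypothetical model decisions (1 = positive outcome), tagged with a
# protected attribute; the data is illustrative only.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

# Demographic parity: compare the positive-decision rate per group.
rates = {g: positives[g] / totals[g] for g in totals}
disparity = max(rates.values()) - min(rates.values())

print(rates)      # {'group_a': 0.75, 'group_b': 0.25}
print(disparity)  # 0.5 -> a large gap flags the model for closer review
```

A large gap does not by itself prove unfair treatment, but it gives the auditor a concrete, quantifiable starting point for questioning how the model treats different groups.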
Artificial intelligence models trained on biased data can produce distorted results and reinforce ever more prejudiced conclusions.
Using artificial intelligence and machine learning in the public sector can bring huge rewards. Deploying it carelessly, however, can damage democracy and the social fabric by promoting discrimination and unfair treatment on a large scale.
It will be important for public auditors to address the challenges posed by this technology.
The purpose of the white paper is to help auditors become better equipped to face the challenges of auditing machine learning.