Artificial Intelligence (“AI”) is gradually moving to the forefront of IT as new and innovative business applications open up.
However, all is not plain sailing with AI. Many individuals distrust it, and distrust the businesses that use it.
The issue with AI is that it can be used for automated decision-making without humans being able to see or understand the underlying algorithms and data used to make the decision.
Some academics have expressed concern that at some point in the not-too-distant future artificial intelligence will exceed that of humans, and humans will be relegated to second-class citizens in a cybernetic world. They call this moment “The Singularity”, and some projections place it at around 2030.
Here are five areas that must be addressed when a business is considering using Artificial Intelligence:
Opacity and Lack of Transparency

One major problem is that the opacity of many AI decision-making systems prevents their deployment. People can be denied access to the algorithms because they are “proprietary” to the organisation that developed them, or, in some cases, because the AI system itself has evolved them as it receives data and no one can fully reconstruct them.
In some jurisdictions, for example the EU with its GDPR or California with its CCPA, this can be a major issue. Under EU regulations, an individual has the right to receive “meaningful information” about the logic used in automated decisions that have a “legal or similarly significant effect” on them. The rejection of a credit application is a good example.
If that information cannot be or is not provided for any reason, the use of the AI application can be forbidden under EU law.
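One practical way to support that “meaningful information” requirement is to have the decision system record which factors drove each outcome. The sketch below is purely illustrative, assuming a simple rule-based credit scorer; the thresholds, field names, and scoring weights are all invented for the example, not taken from any real lending system.

```python
# A minimal sketch of an "explainable" credit decision: every decision
# carries its own human-readable reasons. All thresholds and field names
# here are hypothetical, for illustration only.

def score_application(applicant):
    """Return (approved, reasons) for a loan application."""
    reasons = []
    score = 0

    if applicant["income"] >= 30000:
        score += 2
        reasons.append("income at or above 30,000 threshold (+2)")
    else:
        reasons.append("income below 30,000 threshold (+0)")

    if applicant["existing_debt"] / applicant["income"] <= 0.4:
        score += 2
        reasons.append("debt-to-income ratio at or below 0.4 (+2)")
    else:
        reasons.append("debt-to-income ratio above 0.4 (+0)")

    if applicant["missed_payments"] == 0:
        score += 1
        reasons.append("no missed payments (+1)")
    else:
        reasons.append(f"{applicant['missed_payments']} missed payment(s) (+0)")

    approved = score >= 4
    return approved, reasons

approved, reasons = score_application(
    {"income": 25000, "existing_debt": 15000, "missed_payments": 1}
)
print("Approved:", approved)
for reason in reasons:
    print(" -", reason)
```

The point is not the scoring logic itself, which is trivial here, but the shape of the return value: a decision plus the reasons behind it, which can then be passed on to the affected individual.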
Data Troubles and Apparent Bias
Machine learning is at the heart of modern AI. It uses training data culled from Big Data databases, the Web, social media, and anywhere else it can be found to build its models and, over time, improve their accuracy. However, this can be inherently unsafe, since the source data may itself carry prejudices and other biases into the training set.
This issue is intimately linked to the quality of the data sources used as training data. As stated above, bias in the data can deliberately or inadvertently cause the AI system to make decisions that are correct according to the algorithm, but incorrect according to a human observer.
There have already been several high-profile cases, including Amazon's experimental recruiting tool, where AI processes were shown to discriminate against particular groups of people.
Until these apparent biases are addressed and removed or explained, the level of trust in AI systems will remain low.
Cyber-Security

This is something of a double-edged sword. Cyber-security organisations use AI to improve the technologies providing cyber defences. Data is at the core of cyber-security, and what better way to analyse it than to have AI do in nanoseconds tasks that would take people significantly longer?
On the other hand, AI can serve as a new weapon in the arsenal of cybercriminals who use the technology to hone and improve their cyberattacks. Seemingly innocuous information sources can be linked to create problems, for example, creating false identities.
Businesses need to be aware of new threats developed using AI technology and keep their defences up to date to protect against them. One danger is treating AI risks as just another category of security risk and underestimating their potential.
Few conventional software hacks have the potential to cause loss of life (think of AI-assisted medical procedures, self-driving cars, and aeroplanes) or to start a war by feeding disinformation into military systems.
Loss of Skills
An unintended side-effect of AI is a loss of skills as decision-making processes are handed over to AI bots. One commentator uses the Pareto distribution as a model: eighty per cent of all decisions will be made using AI, with only twenty per cent involving human interaction.
Threats to the Democratic Process

AI is increasingly being used in the democratic process to analyse campaign programmes and voting results, identifying and targeting groups of voters and predicting their behaviour almost down to the individual level.
This is a serious risk to electoral systems. If the electoral register is manipulated by AI, if the information presented to the electorate is deliberately skewed, or if the votes cast are manipulated, then the entire electoral process is flawed.
There is considerable suspicion, but little provable evidence, that the electoral process has recently been subverted by AI-driven activities, for example in the 2016 Brexit referendum in the UK and in the 2016 US Presidential Election that brought Donald Trump to power.
Media speculation has done little to increase trust in AI in this regard.
AI has tremendous potential for business to develop new and effective data-processing methods that will ease our existence by removing drudgery from our lives. But for full acceptance, businesses need to address customer-acceptance issues; in short, they need to become much more open and transparent.