Mention of the term “Artificial Intelligence” has come to instill fear in the hearts and minds of many ordinary people, and this set my curious mind on a discovery mission.
The timing is apt because of the COVID-19 pandemic, which is decimating many jobs around the world and which is surely fueling the Fourth Industrial Revolution (4IR).
Therefore, bad dreams about the boogeyman called AI are going to become even more real in the lives of the many people who are in jobs or looking for work, and who wonder whether they will ever be employable again in the face of machines that seem set to run rampant and take over the remaining jobs.
The inspiration for this eNsight
It has been my observation that many people talk about AI as though it is a future event, something that is still coming, and for which there must be a count-down.
This reminds me of the Y2K event that the whole world was gearing up for, and that became an anti-climax at 00:00 on 1 January 2000.
Nothing happened. Stock market computers did not crash. Planes did not fall from the skies.
Except that, in the case of AI, many don’t realise it has already been here for a long time.
Why do I fancy the challenge of sharing thoughts on artificial intelligence?
I believe I am best placed to share my thoughts on AI through this new series precisely because I am not an expert on the topic.
I shall be using my own digital technology experience and common sense to advance my thoughts, with a view to helping frame the discussion about the topic and, hopefully, dispelling some of the unfounded fears about it.
So, here goes.
From the get-go, note that artificial intelligence is as old as the arithmetic machine
Despite the popularity the term has gained in the last 5-10 years, artificial intelligence is as old as the Pascaline.
What is the Pascaline?
According to Encyclopedia Britannica, it is the first arithmetic machine on record that could be used at scale at the time.
The Pascaline was designed and produced by the French mathematician Blaise Pascal between 1642 and 1644.
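To make the idea of an arithmetic machine concrete, here is a minimal sketch, in Python, of how a Pascaline-style adder can be imagined: each decimal digit sits on its own wheel, and a wheel that rolls past 9 carries one unit over to the next wheel. This is purely an illustration of the principle of mechanised arithmetic, not a faithful model of Pascal’s actual mechanism.

```python
# A toy illustration of Pascaline-style addition: each decimal digit lives on
# its own "wheel", and a wheel that rolls past 9 carries one unit to the next.
# This is a simplified sketch of the principle, not a model of Pascal's design.
def pascaline_add(a: int, b: int, wheels: int = 8) -> int:
    """Add two non-negative integers digit by digit, propagating carries."""
    result, carry = 0, 0
    for position in range(wheels):
        digit_a = (a // 10**position) % 10
        digit_b = (b // 10**position) % 10
        total = digit_a + digit_b + carry
        result += (total % 10) * 10**position
        carry = total // 10
    return result

print(pascaline_add(1642, 1644))  # prints 3286
```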
What is the relationship between the Pascaline and artificial intelligence?
Technical definition of AI
According to Wikipedia,
“Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term “artificial intelligence” is often used to describe machines (or computers) that mimic “cognitive” functions that humans associate with the human mind, such as “learning” and “problem solving”.”
My common sense definition of artificial intelligence
Distilling from Britannica’s description of the Pascaline and Wikipedia’s definition of AI, my common sense description of the latter is “the act of using machines to solve problems that involve human intelligence, but where the data are too large for the human brain to compute.”
So, in its most basic form, artificial intelligence came into being as soon as humans needed to compute data that were mentally taxing for the human brain, and this need led to the development and use of machines for such computations.
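To illustrate this common sense definition, here is a minimal sketch of a task that involves nothing more than ordinary human intelligence, adding up and averaging numbers, but whose sheer volume of data makes it impractical for the human brain and trivial for a machine. The numbers themselves are made up for the example.

```python
# A minimal illustration of the common sense definition above: simple
# arithmetic that a person could do in principle, but at a volume that only
# a machine can handle comfortably. The data here is randomly generated.
import random

values = [random.randint(1, 1_000) for _ in range(1_000_000)]  # a million data points
total = sum(values)
average = total / len(values)
print(f"Sum of {len(values):,} values: {total:,}; average: {average:.2f}")
```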
Developments in machine computations
As with many other human inventions that have been, and continue to be, improved, including electricity, automobiles, and flying machines, humans are also improving computer capacity and software to perform increasingly complex computations on volumes of data that keep growing exponentially.
One of the terms most commonly used in connection with AI is machine learning.
As can be expected, many AI developments involve the ability to learn from data, and those learnings can then be used to instruct other machines, such as robots, to perform repetitive functions that have no unexpected variations and can therefore be automated.
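As a minimal sketch of what that learning looks like in practice, the example below fits a simple model on made-up past observations and then applies it to new cases automatically. It assumes the scikit-learn library is available; the maintenance scenario and all of the numbers are invented purely for illustration.

```python
# A minimal sketch of machine learning: the machine "learns" a rule from past
# examples and then applies it automatically to new, repetitive cases.
# Assumes scikit-learn is installed; the data below is made up for illustration.
from sklearn.linear_model import LogisticRegression

# Past examples: [hours of machine use, error count] -> needed maintenance (1) or not (0)
past_observations = [[10, 0], [50, 0], [200, 1], [700, 3], [850, 4], [900, 5]]
needed_maintenance = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(past_observations, needed_maintenance)

# The learned rule can now flag new cases without a human checking each one.
print(model.predict([[820, 4], [30, 0]]))  # expected output along the lines of [1 0]
```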
The moral of the artificial intelligence story
AI has been with us for centuries.
The only difference is that it was not called by this title back then.
A lot of the technology developments that we see today would not be possible without AI.
In return, these developments are informing improvements in AI for even more complex computations and machine learning.
AI has been good for human development.
Therefore, we do not have any reason to fear it.
By embracing this phenomenon, we can only become better at identifying and/or developing new careers that are in line with current and future technology developments and job requirements.
What is your level of fear about artificial intelligence - before and after reading this eNsight?
As I said, I want to develop eNsights about this topic using my common sense, sharing my thoughts from a non-expert’s point of view, thereby avoiding technical jargon and the perpetuation of the AI boogeyman syndrome that has come to grip so many of us.
So, what is your level of fear about this topic now?
The next eNsight about artificial intelligence
In the next eNsight in this AI series, I shall be sharing my thoughts on the simple criteria you can use to assess the level of danger your job faces due to AI.