Originally published: December 2016
Part 1 of 2
Hello again! It’s been a while, but with the flurry of stories surrounding the Presidential election, I made a conscious decision to stay away from writing. As of this past weekend, many of you have heard of further claims of foreign interference in the election. And, as the title of this post suggests, I will not be talking about that!
Fake news, foreign interference, protection of information, conflicting reports, ascertaining intent, spearphishing attacks, typos, and so on will be talked about in one of my later posts, probably early 2017. Despite the hype, I’m actually trying to let the dust settle a bit, in order to present a clearer picture (I hope).
Some of you follow and read the ever-brilliant Paul Ferrillo, who is a big proponent of Artificial Intelligence and Learning Machines (herein AI/LM). The two of us go back and forth on this issue quite a bit and, for the most part, we have areas of agreement (and of course, some disagreement). Paul has certainly given me a lot to think about regarding AI/LM, but not necessarily in the sense of what AI/LM’s role in the cyber and information security domain is. That part is fairly self-evident. AI/LM are there to help, namely: process raw/big data into intelligence, thwart threats, reduce vulnerabilities, and assist in the response and recovery phases. All pretty straightforward.
Rather, my concern is: what is the role of the human in an AI/LM dominated environment?
Full disclaimer, because in the “era of fake news” I feel a responsibility to give one: I do not have a crystal ball and have no idea where this will go. Therefore, what I present below are just thoughts, opinions, and hypotheses, with history acting as a teacher.
This is a two-part post, focusing on two questions, and will be relatively informal. Perhaps these posts are a little bit more philosophical in nature as well, focusing on the 100,000-foot problems we will face in the near future and not on how to stop your next DDoS attack.
Here goes with the first question…
Will our reliance on technology also be the death of us?
…or at the very least, cripple the way of life many of us have become accustomed to. In order to answer that, we need to ask ourselves this: will AI/LM be a tool or a crutch?
My initial instinct is that – “at the beginning” (now/today/2017) – AI/LM will be a very useful, and necessary, tool. From a purely logistical and resources point of view, it’s a tad ridiculous to think that humans alone can process the 2.5 quintillion gigabytes of data that businesses create per day (thank you Paul for the figure).
What is a Quintillion?
For those unsure of how to process that number, try looking at it like this: your average digital version of a two-hour movie in 720p HD averages about 4GB, give or take a bit. How much business data do we generate? 2,500,000,000,000,000,000 GB/day. In other words, you’re looking at about 625,000,000,000,000,000 two-hour movies per day.
Or let’s put it another way. Assume for a moment all 7,000,000,000 people in the world miraculously turned into a bunch of cyber sleuths with the snap of a finger (and everything was happy-happy-joy-joy in the universe), focused entirely on processing the data we generate. That would mean each person is responsible for processing 357,142,857 GB per day…or, put another way, asking somebody to watch 89,285,714 two-hour movies simultaneously.
And while you are watching all these movies simultaneously, you actually need to know what’s going on and keep track of things, like learning all the characters, figuring out the plot, deciphering intent, and filtering through all the noise that doesn’t matter, like why a street lamp is out of place in one scene…except that a street lamp being out of place may be something you need to look out for, without any prior knowledge that you need to look out for that street lamp being out of place…and that EVERY movie has a sequel…
Tired yet?
Writer’s note: there is a real chance I may have gotten some of this math wrong because the numbers are really dizzying! My point is that you need to go through A LOT of information.
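If you want to double-check the arithmetic for yourself, here is a quick back-of-the-envelope sketch in Python – a rough calculation only, using the 2.5 quintillion GB/day figure and the ~4 GB movie size assumed above:

```python
# Rough sanity check of the numbers above. Assumptions (from the post):
# ~2.5 quintillion GB of business data per day and ~4 GB per two-hour movie.

DATA_PER_DAY_GB = 2_500_000_000_000_000_000  # 2.5 quintillion GB/day
MOVIE_SIZE_GB = 4                            # ~two-hour 720p HD movie
WORLD_POPULATION = 7_000_000_000

movies_per_day = DATA_PER_DAY_GB // MOVIE_SIZE_GB      # 625,000,000,000,000,000
gb_per_person = DATA_PER_DAY_GB // WORLD_POPULATION    # 357,142,857
movies_per_person = gb_per_person // MOVIE_SIZE_GB     # 89,285,714

print(f"Movies' worth of data per day: {movies_per_day:,}")
print(f"GB per person per day:         {gb_per_person:,}")
print(f"Movies per person per day:     {movies_per_person:,}")
```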
So now that you have sufficiently gotten your head around how big a quintillion is, I think you’ll agree with Paul (and me) that some AI/LM is probably needed to process all that data. There’s a great piece from 60 Minutes on IBM’s Watson available here that shows how you can go from the gameshow Jeopardy! to fighting cancer by processing huge amounts of data using AI/LM.
Okay, I get it, I need AI/LM
Fine, so we agree that AI/LM is a necessary tool today. And I suggest to you it is just a matter of time until we reach this inevitability: AI/LM will become a crutch.
Now, whether you believe this is a good or bad thing very much depends on your philosophical stance and your point of departure on how cyber and information security should be dealt with, all within the larger context of life.
So, before we go all-in on AI/LM dependence, let’s take a few steps back and look at a different technology we have come to depend on in our daily life. Depending on what you consider “the truth” to be (when, where, who, in this case), there was an invention created about 5,000-6,000 years ago (give or take a bit). We use that invention all the time. In fact, we are using it right now. And the name of that invention is: writing.
What Does Writing have to do with a Human’s Place in an AI/LM Dominated Environment?
I have always considered “writing” to be a technology and in its most literal sense it is. “Writing” is a series of skills and methods used to produce something or complete an objective. Don’t believe me? Check out the Webster dictionary meaning of the word: technology.
Now, let’s talk about this dude named Socrates. Some of you may have heard of him. He lived about 2,500 years ago, a year here, a year there, we suspect. There is also this other dude you may have heard of. His name is Plato. A great deal of what we know about Socrates comes from Plato’s writings, in the form of dialogues, a type of prose common to that time period.
One such dialogue, written in approximately 370 BC, is called The Phaedrus. This piece recounts a series of exchanges between Socrates and Phaedrus, an ancient Athenian aristocrat (trust me, this all relates to AI/LM, just give it a moment!). During the dialogue, Socrates relates the story of King Thamus of ancient Egypt being presented with the gift of letters (“knowledge”) by the Egyptian god Theuth. Socrates said:
This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
Short, crude, inelegant (possibly unintelligent) version: words will destroy your memory and may even make you dumb, not more knowledgeable.
With all apologies to Plato, how about I update that dialogue? Instead of the gift of letters, Theuth will present Thamus with the gift of AI/LM.
This, said Theuth, will make the Egyptians safer and give them better security; it is a specific both for the personal and professional user. Thamus replied: O most ingenious Theuth, the parent or inventor of a technology is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of AI/LM, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create sloppiness in the user’s habits, because they will not use what they have been taught about cyber and information security; they will trust to the external AI/LM and not make safe and secure decisions for themselves. The specific which you have discovered is an aid not to security, but to provide a comfort blanket, and you give your disciples not safety, but only the semblance of safety; they will be users of many Internet things and will have learned nothing; they will appear to be confident in their security measures and will generally know nothing; they will be unsafe users to themselves and to others, having the show of safe and secure Internet usage without the reality.
All Roads Lead to AI/LM Becoming a Crutch
Okay, so before anybody starts saying, “George, are you saying writing is a BAD thing?” let me explain. No, I do not think writing is a bad thing. I think it’s a fantastic thing (how else would I be reaching you?). But what I think we need to do is figure out what our place – as humans – will be in an AI/LM-dominated environment before we all go gangbusters turning on these machines, because all indicators show we are going that way. Technological development, cost reduction of inputs and manufacturing, demand, and economies of scale all suggest it’s just a matter of time before AI/LM become a household consumer product (more on that in the next post).
And something particularly worthy of consideration is that those who have become dependent on technology in their daily life, particularly over a period of time (like millennials in the West), stand to lose disproportionately more than those who are just starting to adopt various technologies (how do you miss a smartphone if you’ve never had one?).
This is why I believe that “writing” – as a technology – provides the perfect example of whether we use technology as a tool or a crutch. This is not to say that our memories are no longer capable of remembering the amount of information they would have in the pre-writing days; it’s just that a few hundred or so generations have passed since we have used this skill as elaborately as we used to (much like other survival skills). Some of us Neanderthals still do basic arithmetic in our heads, while there is a clear shift toward people who can’t figure out a 15% tip at the restaurant without the assistance of their iPhone’s calculator…wait, there’s an app for that, right?…because I don’t know what ÷ or / means or how to use those funny looking symbols…maybe I should just let the restaurant add the tip to my bill, that’s easier, right?
Of course, I’m only partially joking, as I am still a firm believer that as humans we should retain certain survival skills, whether they are in the natural domain or the cyber domain (feel free to discuss what you think these necessary survival skills should be as everybody has their own list).
When More is Not Better
There certainly was a time when writing was a necessary tool in the development and progression of knowledge. Writing is required to pass on insight and wisdom from generation to generation. We can’t reset our knowledge with every generation (that would be a wonderful waste of time and potential). And we still use writing for that development and progression today…except for the fact that we also pollute the well with bad, unwise, and sometimes irrelevant writing (fake news, Twitter psychobabble, clickbait, some of you may even say this post).
So, just as “more information” with “more ways to access it” does not necessarily mean we are “more knowledgeable,” we should be cautious in our next step, especially since we are working on different timescales this time, namely, that we are moving far faster. To think that “more technology” will make us “more secure” – especially if we start to sacrifice basic Internet survival skills…like being able to identify a spearphishing attack…because that had no influence on aaanything in 2016, did it? – is to run the risk of creating a long-term problem that we may not be able to untangle ourselves from so easily…or ever.
Remember, the universe is not all happy-happy-joy-joy and its current 7,000,000,000 inhabitants don’t all play for the same team.
So, on that note, on the topic of big numbers, here are 47,000,000 reasons to smile. Sometimes you think something may not happen, but it does, and catches you totally off guard, because of your strict belief that it will never happen (okay, it’s a cartoon, but you get my point).
See you in the next post, where all those fancy and techy cybersecurity jobs everybody wants today are gone within a generation! 🙂