AI for Propaganda
Part I of Tech for Evil: the large quantities of data that entities collect about us can be used to build complex behavioral models that uncover patterns of behavior that even we ourselves are unaware of. These models can be used for propaganda to subvert democracy.
Published by Dr Jiulin Teng on 22 Dec 2021
As 2021 comes to an end, I plan to write two short posts on the subject of Tech for Evil. This overall theme may be disconcerting to many, but as the underlying technologies mature, it is unavoidable that some will use them to do evil. Here, I choose to omit the obvious: surveillance, digital currency, and “social credit” systems pose transparent dangers to liberty. I will instead cover two of the most insidious areas: AI for Propaganda, and VR for Dehumanization.
Disclaimer
Before I begin, I must assure readers that I have no information, direct or indirect, on efforts by any entity to use tech for evil in the manners that I describe. I make no accusation in this post, and any relation between the methods described in this post and reality is coincidental.
I should also like to point out that falsehood is never censored; rather, it is the inconvenient truth that often stares down the barrel of a loaded gun.
Behavioral Data & AI
An aspect of AI that many, especially laymen, do not appreciate is that data is a prerequisite for training statistical models, whereas rule-based models are not bound by this constraint. In reality, rule-based models only work well on relatively structured tasks. We can think of these rules as switches and relays in an electric circuit or valves and tanks in a mechanical system. They can be quite useful for automation, but the rules limit them in two ways (see the sketch after this list):
1. The designer must already know the rules. For example, someone may consider males between 20 and 35 with three misdemeanor charges to be high-risk individuals.
2. The number of rules must be tractable. When we have hundreds of rules, each with a dozen branches, the number of branch combinations to be coded literally exceeds the number of atoms in the observable universe.
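As a minimal sketch of both limits, consider the hypothetical Python below. The rule, its thresholds, and the function name are invented to match the misdemeanor example above; they describe no real system.

```python
# A minimal, hypothetical rule-based screen; thresholds are invented.
def is_high_risk(age: int, sex: str, misdemeanors: int) -> bool:
    # Limit 1: the designer must already know and hard-code the rule.
    return sex == "male" and 20 <= age <= 35 and misdemeanors >= 3

print(is_high_risk(age=28, sex="male", misdemeanors=3))  # True

# Limit 2: tractability. With 100 rules of a dozen branches each,
# the space of branch combinations dwarfs the roughly 10**80 atoms
# in the observable universe.
print(12 ** 100 > 10 ** 80)  # True
```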
To model human behavior, I reckon that statistical models are the only appropriate approach. Considering how complex behaviors can be and how many interrelated environmental inputs there are, obscenely large quantities of data are required.
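As a toy contrast with the rule-based sketch above, the following fits a logistic model to synthetic “behavioral” features. Everything here is invented for illustration; the point is that the model learns its own decision boundary from data rather than from hand-coded rules, and the richer the behavior, the more data it demands.

```python
import numpy as np

# Synthetic stand-in for behavioral data: rows are individuals,
# columns are invented features (e.g. hours online, posts per day).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 3.0])
y = (X @ true_w > 0).astype(float)

# Logistic regression by gradient descent: the "rules" (weights)
# are learned from data instead of being written by a designer.
w = np.zeros(5)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

print(np.round(w, 2))  # weights point in the same direction as true_w
```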
Fortunately for some, and unfortunately for many, technology has provided unprecedented tools for collecting behavioral data. I list a few here, though I am certain that other tools exist that I am unaware of:
1. Smartphone & Peripherals. Almost everyone walks around with a “smart device” in hand today. These devices collect all manner of data and send them “home”. With connected peripherals such as “smartwatches”, they can track where you are, what you do, whom you meet, what you think, how you think, what you say, and, while you do these things, what your physiological state is: Are you nervous when you lie? Is your relationship real?
2. Social Media. Not all entities can obtain every aspect of the data from your “smart devices” and link it together. Usually, they may not need to: you volunteer so much data on social media that little work is required to piece it together.
3. Search Engine & Web Browser. It takes a bit more work to interpret your search and browsing history. I would venture to say that the industry leader is doing a poor job on this front, even at making money for itself. Still, the more data you share, the more entities can use them to improve their models.
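To illustrate what “piecing together” might look like, here is a hypothetical sketch that joins records from several sources on a shared identifier. Every source name, field, and value is invented; the point is only that cross-source linkage is a trivial merge once a common key exists.

```python
from collections import defaultdict

# Hypothetical per-source records keyed by a shared identifier
# (an email, an ad ID, a device fingerprint). All data is invented.
location_pings = {"user42": {"last_seen": "48.85,2.35"}}
social_posts = {"user42": {"topics": ["politics", "economy"]}}
wearable_data = {"user42": {"resting_hr": 74}}

# Linking the sources is a simple merge on the common key.
profile = defaultdict(dict)
for source in (location_pings, social_posts, wearable_data):
    for user_id, fields in source.items():
        profile[user_id].update(fields)

print(profile["user42"])
# {'last_seen': '48.85,2.35', 'topics': [...], 'resting_hr': 74}
```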
AI for Propaganda
With behavioral data, entities could implement AIs for their own commercial interests. Sometimes, this is about gluing you to their services (games, apps, tools…) to make you spend money directly or click on ads that they display. Sometimes, however, such AIs can go much, much deeper.
At a time when extraordinary events have become frequent occurrences and opinions on a large number of issues are strong and divergent, entities could link other behavioral data to how an individual acts and reacts. For example, when an entity tells a lie, it can observe how each type of individual reacts; this allows the entity to tweak its models continually, until lies can be told in such a way that they are widely accepted as the truth.
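One plausible shape for such a feedback loop is a multi-armed bandit that tests framings of the same claim against an audience and keeps whichever is accepted most often. The sketch below is a guess at the mechanism, not a description of any deployed system; every framing and acceptance rate is invented.

```python
import random

# Hypothetical framings of the same claim, and the (unknown to the
# entity) probability each is accepted by the target audience.
framings = ["bare assertion", "appeal to fear", "fake statistic"]
true_accept_rate = {"bare assertion": 0.2,
                    "appeal to fear": 0.5,
                    "fake statistic": 0.7}

shown = {f: 0 for f in framings}
accepted = {f: 0 for f in framings}

random.seed(0)
for _ in range(5_000):
    # Epsilon-greedy: mostly exploit the best framing so far,
    # occasionally explore the others.
    if random.random() < 0.1 or not any(shown.values()):
        framing = random.choice(framings)
    else:
        framing = max(framings, key=lambda f: accepted[f] / max(shown[f], 1))
    shown[framing] += 1
    # Simulated audience reaction stands in for observed engagement.
    if random.random() < true_accept_rate[framing]:
        accepted[framing] += 1

best = max(framings, key=lambda f: accepted[f] / max(shown[f], 1))
print(best)  # converges on the most persuasive framing
```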
As such, the lines separating democratic institutions from commercial interests can be wholly erased without much protest, and what protest arises can be crushed violently without repercussion. The meanings of words no longer matter, insofar as the public can be made to believe that what they thought they knew to be true is false and what they thought they knew to be false is true.
While I cannot presently provide an example of this type of AI for propaganda in use, I have no doubt that, as I stated in the opening, the technologies that make it possible are quickly maturing. It is not a question of if but when it will go live. An interesting aspect is that few would know when AI for propaganda becomes fully functional: those who point it out will probably be labeled conspiracy theorists or domestic terrorists.