
Data Privacy Is Crucial to Fighting Disinformation! Let’s See Why

Disinformation is one of the defining features of the times we live in. Every day, scrolling through the news, we put effort into telling what's true from what's not. We're surrounded by fake news on a daily basis. Competing versions of reality blur into one another, and we lost control over that boundary long ago. "Liquid Modernity" is sociologist Zygmunt Bauman's term for the present condition of the world, which he contrasts with the "solid" modernity of the past. Nothing is certain, everything is liquid, information flows. The unchecked collection of behavioral data concerns privacy advocates, but not only them. "It continues to raise attention in our community," said Greg Touhill, president of the cybersecurity consultancy Appgate Federal and a retired Air Force brigadier general.

What if we could actually fight disinformation? 

Research shows that we can inhibit disinformation by keeping our privacy guard up. How does this work in practice? Tech companies collect data whenever we buy online or like a tweet, and that data can be weaponized into influence operations that are much harder to detect. As we go about our daily lives, these pieces fuel the global disinformation machine. What makes it worse is that data-privacy legislation isn't keeping up with the threat, at least according to intelligence-community veterans, disinformation scholars, and academics.

The 2016 presidential election was an example of a population-scale disinformation campaign

The disinformation campaign around the 2016 presidential election led to some reforms by social media giants and aggressive steps by U.S. Cyber Command. The 2020 election was relatively free of disinformation, but that may reflect only a pause, and there is still plenty to worry about: adversaries are shifting to subtler manipulation based on personal profiles built up from aggregated data. Still, the effort put into voting transparency played a strategic role. It helped the public understand and have faith in the electoral outcome, especially those who preferred a different result, and it held a secondary value: telegraphing a deterrence message to U.S. adversaries.

Researchers use Facebook Likes to predict sensitive personal attributes

In their 2013 paper, Michal Kosinski and his colleagues showed that easily accessible digital records such as Facebook Likes can be used to predict sensitive personal attributes, including sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender.
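
To make the mechanism concrete, here is a minimal sketch of that style of pipeline in Python. It follows the paper's broad recipe (compress a sparse user-by-Like matrix with SVD, then fit a simple classifier), but everything here is synthetic and illustrative, not the authors' actual code or data.

```python
# Toy Kosinski-style pipeline: SVD over a sparse user-by-Like matrix,
# then a classifier predicting a binary attribute. All data is synthetic.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

n_users, n_likes = 2000, 5000
# Sparse 0/1 matrix: rows are users, columns are pages they might Like.
likes = sparse_random(n_users, n_likes, density=0.01, random_state=0,
                      data_rvs=lambda n: np.ones(n)).tocsr()

# Synthetic binary attribute loosely driven by a subset of Likes,
# standing in for a real trait such as political affiliation.
signal = np.asarray(likes[:, :50].sum(axis=1)).ravel()
attribute = (signal + rng.normal(scale=0.5, size=n_users)
             > signal.mean()).astype(int)

# Step 1: compress the Like matrix into latent components
# (the 2013 paper used 100 SVD components).
components = TruncatedSVD(n_components=100, random_state=0).fit_transform(likes)

# Step 2: predict the attribute from the components.
X_train, X_test, y_train, y_test = train_test_split(
    components, attribute, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

With real Likes and real labels, this same two-step recipe is what lets a handful of seemingly innocuous signals reveal traits a user never disclosed.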

Joseph E. Brendler, a civilian consultant who worked with Cyber Command as an Army major general, worries about these processes. "A dynamic that started with a purely commercial marketplace is producing technologies that can be […] used for the purposes of influencing the people of the U.S. to do things other than just buy products." There are no appropriate forms of regulation yet, he says. He adds that these technologies activate people who would otherwise be mere observers of a political phenomenon, accomplishing an extreme shift toward greater political activism. That is not generally a bad thing, he claims, but the extent to which it might produce a violent outcome is.

Digital cloning

National security leaders struggle to predict broad social movements from large volumes of data. Companies, by contrast, are great at anticipating individual behavior from the "human source" data their customers provide. It is now possible to create a digital clone of a person, and once it's constructed, predicting his or her online behavior is a piece of cake. That is a core part of social media companies' monetization model. Yet in the wrong hands, the same capability would enable much more effective disinformation.
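
As a toy illustration of what "predicting online behavior" means at its simplest, the sketch below trains a first-order Markov model on an invented log of one user's actions, then forecasts or simulates what that user is likely to do next. Real platforms use far richer models over thousands of signals; the action names and log here are made up for illustration.

```python
# A toy "digital clone": learn transition probabilities between a user's
# observed actions, then predict or simulate their future behavior.
import random
from collections import Counter, defaultdict

# Invented action log for one user, oldest to newest.
history = ["scroll", "like", "scroll", "share", "scroll", "like",
           "comment", "like", "scroll", "like", "share", "scroll"]

# Count how often each action follows each other action.
transitions = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    transitions[current][nxt] += 1

def predict_next(action):
    """Most likely next action after `action`, per the observed log."""
    counts = transitions[action]
    return counts.most_common(1)[0][0] if counts else None

def simulate(start, steps, seed=0):
    """Sample a plausible future behavior sequence for the clone."""
    rng = random.Random(seed)
    out, current = [start], start
    for _ in range(steps):
        counts = transitions[current]
        if not counts:
            break
        actions, weights = zip(*counts.items())
        current = rng.choices(actions, weights=weights)[0]
        out.append(current)
    return out

print(predict_next("scroll"))       # e.g. 'like'
print(simulate("scroll", steps=5))  # a sampled future session
```

Swap the invented log for years of real clicks, purchases, and Likes, and the same idea scales into the behavioral prediction the monetization model depends on.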

Digital cloning vs. data privacy

A paper in "Information & Communications Technology Law" warns that digital cloning raises issues of consent and privacy violations. Issues arise whenever the data used to create a digital clone is obtained without the informed consent of its owner, but only when that owner is human: data created by computers or AI may not raise consent and privacy issues, at least as long as AI and robots are not deemed to have the same legal rights or philosophical status as persons.

Alina Polyakova and Daniel Fried observe that social media companies have focused on takedowns of inauthentic content. That is a good step, but it doesn't address deeper issues of content distribution (e.g., micro-targeting), algorithmic bias toward extremes, or the lack of transparency. The authors suggest the U.S. government make several organizational changes to counter foreign disinformation, and they believe the government's current policy is inconsistent.

Home data privacy

Connectivity in the home and the number of connected devices continue to expand year over year. These increasing levels of connectivity in consumers' lives provide easy targets for hackers and raise security and privacy concerns for consumers. For now, to avoid feeding the global disinformation machine, we should keep our data at home and not let it escape into the cloud, or even further. Consumers grow warier about data security with every breach, whether it happens at a credit card company, a store, or a website, and given how frequent data breaches are around the world, their concerns are easy to understand.

Conclusion

Nowadays, we need a comprehensive approach to the risk generated by data. We should look at data throughout its whole governance lifecycle, not focus only on the outcome or exploit the data without considering the whole process. It's important to understand the risks associated with data collection for microtargeting, and to pay attention to the ramifications of aggregate data collection, even when that collection is lawful.

Want more information?

Subscribe to the Hummingbirds Newsletter for fresh information in your inbox every week.