If you are reading this, then you are probably one of the 3.8 billion people with a smartphone. This morning when you woke up, like 2.8 billion other monthly users, you probably opened your favorite social network, wished an old high school friend a happy birthday, checked the weather forecast and read the news, all while calculating the fastest way to get to the office. A subway breakdown? Too bad Google didn't foresee that one; you'll have to share it on Insta. Instantaneous, free, in the palm of your hand lie infinite possibilities. Before going any further, we just wanted to remind you what's behind your new LCD screen.
Artificial intelligence and marketing: deciphering an economy based on profiling.
New technologies have allowed us to rethink our lifestyles, both professionally and personally. But while social interactions and access to services and consumer goods have been greatly facilitated, what is the place of your privacy?
Like any other internet user, you aspire only to the best. Companies, and Big Tech in particular, have understood this well, even if it sometimes, even often, means overriding users' rights under the GDPR. Companies have rethought their models, moving from mass marketing to mass personalization. To understand how this personalization is made possible, we need to look at artificial intelligence. In the marketing sphere, artificial intelligence revolves around three main issues:
Combined, these artificial intelligence solutions allow the creation of what activist Eli Pariser has described as “filter bubbles”. Algorithms are now able to offer two people living under the same roof different results for the same search terms. Each user thus evolves in a bubble designed specifically for them, based on a very fine-grained profiling of their personality. And it is you who, without even realizing it, feed your virtual profile on a daily basis.
To understand how the internet giants are able to define your profile to an unparalleled level of precision, it is worth looking at the algorithms that make up what is called – sometimes too broadly – artificial intelligence.
In concrete terms, an algorithm is a computer program designed to respond to a defined problem according to precise operating rules. But what is really interesting to note when talking about artificial intelligence is the “bottom up” approach that characterizes these algorithms.
Whereas just a few years ago engineers followed a “top down” approach, configuring their machines very precisely to arrive at a desired result, today machines are set up to start from the data itself (your data) and build models capable of making the desired predictions. This is called “machine learning”. Your data is collected through a platform (a social network, for example), then prepared and structured according to one or more analysis models (operating rules) that are tested and trained with the aim of achieving the desired purpose.
To put this concept into practice, let's take the example of Facebook's machine learning algorithm. In order to determine your virtual profile, the algorithm records and analyzes all the information related to your use of the social network: your profile data, your friends' data, the number of clicks on different pages, your likes and comments on posts, and much more. Based on this combined information, Facebook can push tailored content to your screen that might interest you. The more time you spend on the social network, the more information you provide to the algorithm, the better and more precise its knowledge of you becomes, and the more robust the filter bubble will be.
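As an illustration only, here is a deliberately simplified sketch of this feedback loop. The topics, interaction log and scoring rule are invented for the example; the real algorithm is vastly more complex and not public. The point is the mechanism: interactions feed a profile, and the profile reorders what you see.

```python
from collections import Counter

def build_profile(interactions):
    """Aggregate a user's clicks/likes into per-topic interest weights."""
    counts = Counter(topic for topic, _action in interactions)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

def score(post_topics, profile):
    """Score a candidate post by how well its topics match the profile."""
    return sum(profile.get(t, 0.0) for t in post_topics)

def rank_feed(posts, profile):
    """Order candidate posts so the best-matching content comes first."""
    return sorted(posts, key=lambda p: score(p["topics"], profile), reverse=True)

# Hypothetical interaction log: (topic, action) pairs
interactions = [("sport", "like"), ("sport", "click"), ("cooking", "click"),
                ("sport", "comment"), ("politics", "click")]
profile = build_profile(interactions)   # sport dominates: 3 of 5 interactions

posts = [
    {"id": 1, "topics": ["politics"]},
    {"id": 2, "topics": ["sport", "cooking"]},
    {"id": 3, "topics": ["travel"]},
]
feed = rank_feed(posts, profile)
# The sport/cooking post ranks first. Every new interaction reshapes the
# profile, which is how the filter bubble reinforces itself over time.
```

Each click on the recommended content makes the next recommendation more similar, which is the self-reinforcing loop described above.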
But while this very concept of profiling, defined in Article 4 of the GDPR, sparks debate because of its intrusiveness into people's private lives, what are the stakes of the ubiquity of such technologies in our daily lives?
While machine learning was initially reserved almost exclusively for web giants, any company can now take advantage of these technologies. As we witness their unprecedented growth, marked by both its speed and its ability to push the boundaries of the state of the art, it would not be prudent to overlook their tendency to absorb an increasing amount of personal data, often at the expense of individual privacy.
In Europe, profiling for marketing purposes is now in most cases conditional on the consent of the data subject (Article 22 of the GDPR). However, how can we ensure that consent has been given freely and in an informed manner when the user has no real visibility either on the data collected about them, or on the recipients of their data, and even less on the way in which their data is cross-referenced, structured and analyzed?
While algorithms are often referred to as “black boxes” because of the lack of visibility they offer, both to users and to their own programmers, the issue of ethics in artificial intelligence is increasingly present in the debates of privacy organizations and legislators. Unfortunately, because the complexity of artificial intelligence technologies and their rapid evolution have far outpaced legislation in this area, the question of the control data subjects have over their personal data remains one that, despite avenues of reflection being explored, has still not found a concrete answer.
Although ethics and privacy in artificial intelligence are still at the discussion stage, privacy organizations, companies and legislators agree on a vision of privacy-friendly artificial intelligence articulated around four main pillars.
The first pillar is the concept of explicability of artificial intelligence. As we have seen above, reverse engineering machine learning algorithms is almost impossible: in most cases their creators themselves do not know what decisions the algorithms make or how these decisions combine to achieve the expected result. Yet, as required by the GDPR, any data subject who is subject to automated decision-making has the right to obtain human intervention from the controller in order to challenge the decision made. How can we demystify algorithms to give this right back to data subjects?
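To make what explicability asks for concrete, here is a deliberately toy example (the feature names, weights and threshold are all invented). With a simple linear model, unlike a black box, every automated decision can be decomposed into per-feature contributions, which is precisely the kind of account a data subject contesting a decision would need:

```python
# Invented model: each feature contributes weight * value to a targeting score.
WEIGHTS = {"pages_viewed": 0.4, "ads_clicked": 1.2, "account_age_days": -0.01}

def decide(user):
    """Toy automated decision: target the user if the total score exceeds 1.0.
    Returns the decision AND the per-feature contributions behind it."""
    contributions = {f: WEIGHTS[f] * user[f] for f in WEIGHTS}
    return sum(contributions.values()) > 1.0, contributions

def explain(contributions):
    """Rank features by the magnitude of their influence on the decision."""
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

user = {"pages_viewed": 12, "ads_clicked": 2, "account_age_days": 400}
targeted, contributions = decide(user)
# explain(contributions) shows pages_viewed drove the decision most strongly:
# exactly the account that a deep, opaque model cannot readily provide.
```

With modern deep models no such clean decomposition exists, which is why explicability is listed as a pillar rather than taken for granted.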
The second pillar, often debated, is that of transparency. While many companies now have access to artificial intelligence technologies and are extremely fond of user data to train their algorithms, it is common for users not to be informed about the collection of their data, its storage in data lakes and its sharing with the data controller's many business partners. How can we ensure that the data used by the algorithms has been collected in a fair, lawful and transparent manner?
The third pillar on which the think tanks rely is risk assessment, particularly of the risks related to potential biases in the design of the algorithms and in the data sources that feed them. A biased system is likely to have severe consequences for the people concerned; consequences that are difficult to reverse in the context of the black box defined earlier. How can we anticipate the risks in order to fight effectively against biases in the design of algorithms? How can we refine data collection and limit it to only the data relevant to the creation of models?
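A minimal sketch, with entirely synthetic data, of why per-group risk assessment matters: a naive model trained on a sample where one group is barely represented simply reproduces the sampling skew, and only an audit that breaks results down by group reveals the disparity.

```python
from collections import Counter

def train_majority(labels):
    """Naive 'model': always predict the most frequent label seen in training."""
    return Counter(labels).most_common(1)[0][0]

# Synthetic training set: group A is heavily over-represented, and the few
# records for group B all carry a negative outcome.
training = ([("A", "approved")] * 80
            + [("A", "rejected")] * 10
            + [("B", "rejected")] * 10)

# Aggregate view looks fine: the model learned to approve.
model_all = train_majority([outcome for _group, outcome in training])

# Per-group view exposes the bias: for group B the model would learn rejection
# from a sample far too small to be meaningful.
model_b = train_majority([o for g, o in training if g == "B"])
```

The same check, run against the data sources that feed a real system, is the kind of bias assessment this pillar calls for before the model ever reaches production.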
Finally, the fourth pillar, which guarantees that artificial intelligence remains respectful of people's privacy, is the ability to audit it on a regular basis. The aim here is to ensure that the ethical and regulatory requirements in this area are respected. However, without the first pillar, the explicability of algorithms, these audits remain difficult to carry out in practice. What criteria should be taken into account to evaluate respect for confidentiality in algorithms? How can we trace the path taken by personal data from its collection to the display of an advertisement on the screens of the data subjects?
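One concrete prerequisite for such audits is data lineage: recording each processing step a piece of personal data goes through. The step names, fields and purposes below are hypothetical; the sketch only shows the shape an auditable trail could take, from collection to ad display.

```python
audit_log = []

def record(step, data_fields, purpose):
    """Append a traceable entry each time personal data is processed."""
    audit_log.append({
        "step": step,
        "fields": sorted(data_fields),  # which personal data was touched
        "purpose": purpose,             # the declared legal basis / purpose
    })

# Hypothetical lifecycle of one user's data inside an ad pipeline.
record("collection", {"email", "clicks"}, "consented analytics")
record("profiling", {"clicks"}, "interest model")
record("ad_display", {"interest_segment"}, "targeted advertising")

# An auditor can now replay the path the data took, end to end.
trail = [entry["step"] for entry in audit_log]
```

Without explicability of the model in the middle step, even a complete trail like this answers only where the data went, not how it shaped the decision, which is why the two pillars depend on each other.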
Although artificial intelligence technologies have experienced impressive growth in recent years, this growth has largely outpaced their control. While algorithms are able, on the basis of information collected often without your knowledge, to define your virtual profile with extreme precision, the issue of privacy is more crucial than ever. Explicability of algorithms, transparency, anticipation of biases and regular audits are all avenues of reflection that will allow the advancement of a technology that is ethical and respectful of your private life. In the meantime, keep in mind that when the product is free, you are the product.
– Andréa Parisot
Come join us online for a webinar dedicated to “Using AI in Marketing Processes” on March 30, 2021 from 9:00 am to 10:00 am.
The webinar will be hosted by experts in the field:
Chafika Chettaoui – Group Chief Data Officer at Suez
Aurore Raingeard – DPO at Bpifrance
Anton Kisyelyov – Editor-in-Chief and Publisher, Industrial Property and Personal Data Protection, at LexisNexis