Data above gold – data mining

While the 19th century is typically associated with gold mining and the resulting gold rush, the beginning of the 21st century will be known for data mining. Today's gold diggers do not have to move from state to state; they can work from the comfort of their own homes. And while an ounce of gold can be touched and weighed, data cannot be touched. Still, it is one of the most valuable commodities of our time.

Most people already know the rule of thumb: if you do not pay for an Internet service, you are actually the product. What does this mean in practice? Take, for example, the social networks we use daily: Facebook, Instagram, WhatsApp or TikTok. We do not pay for any of them, but when registering, we agree to hand over a certain amount of personal information. And that is the price.


Does the Black Mirror sci-fi series ring a bell? Or Cambridge Analytica? While the former is (at least for now) mere fiction, the Cambridge Analytica case fully revealed how data mining affects, among other things, politics and the decision-making of us all, the electorate.

The fundamental problem with data mining is that as users we are rarely aware of it, and we often have no idea how our data is handled afterwards or who has gained access to it. For social network operators, data is the most valuable item they can trade – it is how they make their living. Our ignorance allows them to sell our data to companies that use it to tailor their offers to catch our interest.

The usual risks related to the collection of personal data are intentional and unintentional data leaks and the use of data for various dishonest activities – from sending spam to blackmail. On top of this, social networks monitor the behaviour of users even when they are visiting other websites – that is, while browsing elsewhere on the Internet.

Based on this behaviour, they then build tailor-made marketing models, which can be misused in politics, especially during election campaigns. This was the case with the aforementioned Cambridge Analytica, which obtained the information and data of Facebook users. Cambridge Analytica was then able to target those users in ways that experts consider controversial, to say the least. Examples include the pro-Brexit campaign in the UK and Donald Trump's 2016 US presidential campaign.

When robots put on a uniform

The automation of warfare is nothing new. From machine guns to precision-guided missiles to drones, this trend is unstoppable and will continue. Thanks to technological development, soldiers can fire much faster, destroy targets over long distances with minimal collateral damage, and control warplanes from the other side of the world. With rapid technological progress, more and more tasks can be delegated to computer and robotic systems – no longer just loading a weapon or navigating. We are gradually coming to terms with the fact that these systems are not only as capable as humans but often exceed human abilities. This is why there is an increasing number of areas where a human can be replaced by a machine – and the battlefield is no exception.

Where is this trend going, and what will the wars of the near and far future look like? And what happens when our weapons are not only autonomous, but truly intelligent? Will we fight side by side with them, or against them if they rebel?

Faster, safer, more accurate. The advantages of these weapons are clear. At present, the most famous, most widely used and most visible of them are undoubtedly unmanned combat aerial vehicles, or drones. These can be controlled by a pilot from the safety of a base, so if the aircraft is shot down, the pilot's life is not endangered. Moreover, the pilots on the ground can take turns, so the aircraft can stay in the air for dozens of hours. At the same time, the aircraft is not limited by having to carry an actual pilot who needs a seat and controls, has to breathe oxygen, and whose body can withstand only limited g-forces during difficult maneuvers. However, the drones deployed so far are still little more than remote-controlled aircraft.

This too is changing: the new generation of drones is able to operate completely independently, without any human assistance in carrying out its tasks. These drones can refuel in flight, attack a target, and land on an aircraft carrier in fully autonomous mode.

However, movement in the air is relatively simple compared to movement on solid ground. We will still have to wait for fully autonomous robotic soldiers able to operate in cities or in open terrain, but even in this area, progress is unstoppable.

However, we are getting ever closer to fundamental moral and ethical questions that we still cannot answer. First and foremost: can we, or should we, entrust machines with decisions over human life and death? At the moment, it is usually a person who decides on the use of deadly force, or who at least confirms or approves the launch or detonation of such a weapon. The weapon can find and track a target on its own, but it waits for a human decision on whether or not to destroy it. This solution is not always practical, though. In missile defense, for example, there may not be enough time: the engagement is a matter of seconds, and by the time a human decides, it may be too late. Or the communication between an autonomous weapon system and its control centre may fail or be jammed, in which case no confirmation would ever come. Yet entrusting these decisions to machines risks malfunction, friendly fire, loss of control, the dehumanization of war, and the rapid, uncontrollable escalation of conflicts. These are burning questions, and we do not have much time to find the answers before fully autonomous robots begin to take part in battles.

Don’t believe everything you see!

Deepfake is a technology that swaps one person's face for someone else's using artificial intelligence, most commonly neural networks. As a rule, creators use the faces of celebrities, because the artificial intelligence needs a large number of photos or videos to learn from – and those of celebrities are all over the Internet. This phenomenon has been at the centre of attention since 2017, when the first pornographic and later political deepfake videos began to emerge. Since then, there has been an ongoing debate about how harmful the phenomenon actually is.
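
For the technically curious: the classic early deepfake tools were built around a pair of autoencoders sharing one encoder, with a separate decoder trained for each face. Below is a minimal sketch of that idea in Python with PyTorch; the layer sizes, image resolution, and class name are illustrative assumptions, not any particular tool's implementation, and the model is an untrained skeleton.

    # Minimal sketch of the classic deepfake architecture: one shared encoder
    # learns a common face representation, while each identity gets its own
    # decoder. Routing a latent code through the *other* decoder produces the
    # face swap. All layer sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class FaceSwapAutoencoder(nn.Module):
        def __init__(self, latent_dim: int = 256):
            super().__init__()
            # Shared encoder: compresses a 64x64 RGB face crop to a latent code.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
                nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
                nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, latent_dim),
            )
            # One decoder per identity; both train against the same encoder.
            self.decoder_a = self._make_decoder(latent_dim)
            self.decoder_b = self._make_decoder(latent_dim)

        @staticmethod
        def _make_decoder(latent_dim: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Linear(latent_dim, 64 * 16 * 16),
                nn.ReLU(),
                nn.Unflatten(1, (64, 16, 16)),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
                nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
                nn.Sigmoid(),  # pixel values in [0, 1]
            )

        def forward(self, face: torch.Tensor, identity: str) -> torch.Tensor:
            latent = self.encoder(face)
            decoder = self.decoder_a if identity == "a" else self.decoder_b
            return decoder(latent)

    # Training would reconstruct person A's faces with decoder_a and person
    # B's with decoder_b. The "swap" is simply rendering A's latent code
    # through decoder_b afterwards:
    model = FaceSwapAutoencoder()
    frame = torch.rand(1, 3, 64, 64)       # stand-in for a real face crop
    swapped = model(frame, identity="b")   # A's expression in B's likeness
    print(swapped.shape)                   # torch.Size([1, 3, 64, 64])

The shared encoder is the design choice that makes the swap work: because both decoders learn to reconstruct faces from the same latent space, a code extracted from one person's face can be rendered in the other person's likeness.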

While it might seem that the ability to alter video content arrived only with modern technology, the opposite is true. Filmmakers have been manipulating content virtually from the moment it became possible to capture reality on a film strip (leaving aside manipulated photos, which spread like wildfire – see, for example, President Trump's “diet”). And we are not talking only about the trick sequences in old films, however magical they were (such as A Trip to the Moon by Georges Méliès, the father of special effects).

Around the end of the 1930s, artificial content appeared even in the newsreels shown in cinemas: because footage was lacking or unavailable, a substantial part of such films had to be staged in the studio. And manipulation does not even require an altered video – content taken out of its original context is enough. Most of the time, we want to see what we already believe, so even a slight nudge in the interpretation of what we see can amount to manipulation.

In the Czech Republic, we saw a striking example of a manipulative video in 2007, when the art group Ztohoven (a name roughly translatable as “Out of Shit”) hacked into a Czech Television broadcast with footage of a nuclear explosion in the Giant Mountains. No explosion had taken place, of course, but the footage still provoked outrage, if not outright panic.

If videos have been manipulated for so long, why fear deepfakes now? The problem is that the means of producing deepfake videos have expanded dramatically, and the reach of such videos can be enormous. And we are no longer talking only about pornography, but about politics. It is possible to create a video in which a politician says practically anything. The politician does not even have to be speaking at all – they may simply appear in an unexpected context. In theory, such a video could cause major problems. Just imagine a deepfake video of, say, the American president insulting the Russian one. Can we protect ourselves? We cannot rely on technology just yet. Although there are attempts to write algorithms that reveal deepfake videos, the creators of these videos are still one step ahead, because the two sides learn from each other: detectors are trained on existing fakes, and the fakes are then refined to evade the detectors. It is a never-ending cat-and-mouse game.
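
To make the cat-and-mouse game concrete: detection is typically framed as binary classification, where a model scores each face crop as real or fake – and such a model is only as good as the fakes it was trained on. The sketch below, again in Python with PyTorch and with purely hypothetical layer sizes and stand-in random data, shows the skeleton of one training step of such a detector.

    # Illustrative sketch of the defensive side: a tiny binary detector that
    # scores 64x64 face crops as real (0) or fake (1). Real-world detectors
    # are far larger and are trained on big datasets of known deepfakes.
    import torch
    import torch.nn as nn

    detector = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 1),  # one logit: how likely the crop is fake
    )

    # One illustrative training step on stand-in data (random tensors here;
    # a real pipeline would feed labelled real/fake face crops).
    frames = torch.rand(8, 3, 64, 64)
    labels = torch.randint(0, 2, (8, 1)).float()
    loss = nn.BCEWithLogitsLoss()(detector(frames), labels)
    loss.backward()  # a real loop would now update the weights with an optimizer

A new generation of fakes that the detector has never seen falls outside its training data, which is exactly why the defenders keep having to retrain – and why they remain one step behind.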

The very best defense strategy is to accept that not everything we see in the media is actually true. We must be all the more distrustful on the Internet, use critical thinking, and approach any media content we consume with caution – especially when shocking content comes from an unknown or unverified source.
