International Network Observatory

Global Strategy and Big Data

Big Data Challenges

Global Strategy | Posted by Anno Bunnik 05 Jun, 2016 23:27
The founders of the International Network Observatory have published an exciting new book on the challenges of Big Data.

This book brings together an impressive range of academic and intelligence-professional perspectives to interrogate the social, ethical and security upheavals in a world increasingly driven by data. Written in a clear and accessible style, it offers fresh insights into the deep-reaching implications of Big Data for communication, privacy and organisational decision-making. It seeks to demystify developments around Big Data before evaluating their current and likely future implications for areas as diverse as corporate innovation, law enforcement, data science, journalism, and food security. The contributors call for a rethinking of the legal, ethical and philosophical frameworks that inform the responsibilities and behaviours of state, corporate, institutional and individual actors in a more networked, data-centric society. In doing so, the book addresses the real-world risks, opportunities and potentialities of Big Data.

Big Data Challenges can be ordered from Palgrave Macmillan or Springer's website.


“Big Data and what it means for Society and Politics”

Global Strategy | Posted by Anno Bunnik 26 Nov, 2015 11:32

Interview with Professor Andrej Zwitter – Rijksuniversiteit Groningen, Law Faculty. By Hugo van Haastert on 29 October 2015.

Big Data is a huge topic, how do you look at it from an International Relations perspective?

Big Data is an amorphous beast. Look at Big Data from a humanitarian perspective, for example. If you're running around the city of Groningen, you would consider your location data very personal. But if you're in an earthquake zone and you are buried under rubble, you would want to be saved, and you would want people to have access to your location data. The context turns the idea around. Another example is the value of data in a conflict zone: there, location data is very sensitive information. The need for privacy changes with the context.

You look at Big Data from the perspectives of International Relations, Law and Ethics. What does Big Data do to the relationship between government and its subjects?

I am a lawyer by training and I am interested in ethics. As Big Data grows into International Relations, legal and ethical questions become increasingly important. You can't separate the two. Law tries to model society into an ideal of how to best live together. Ethics tries to achieve the same thing. And now Big Data also provides opportunities for social engineering – so-called Big Nudging. This is when algorithms determine, on the basis of Big Data, which stimuli a group will respond to in the desired way. This poses a tremendous problem for freedom of will.

There are of course other problems as well. Once algorithms make decisions for governments – even if algorithms are only used to analyze tweets for trend spotting, for example – this changes the relationship between the state and its subjects. I think there is a fundamental problem when it comes to the rule of law in particular. When state entities make a decision, the recipient has a right to a legal explanation: a reference to the law and the reasons why the law applies to the decision – this is the principle of legal certainty. If you have an algorithm making that decision, you do not know what the evidence base is; you might be able to point to the law, but you have no idea why the decision was made. This question of legal certainty and transparency is juxtaposed with increasing automation, and this shakes the fundamentals of the rule of law. There is a growing literature, and many conflicting opinions, about whether algorithms should be making decisions, particularly in politics and governance. For example, Linnet Taylor (Senior Researcher at the University of Amsterdam) just wrote a blog post on why people, not algorithms, should make decisions.

The only legally viable option will be to accept that eventually it still has to be a human person who makes the final decision and who oversees the evidence. The principle of the rule of law also entails that you have the right to face your accuser. You can't face an algorithm. You also have the right to be told the reason behind the decision. It's a question of legal certainty: you have to know what you've done wrong, so you won't make the mistake again in the future. If you have the algorithm deciding, many aspects that we take for granted within the concept of the rule of law will no longer be possible to realize. So I think the only consequence is machine-supported decision making rather than machine decision making.

Do you consider profiling to be problematic, with potential risks of discrimination?

Yes and no. We have a tendency to assume that as soon as race or ethnicity is registered or mentioned, this already constitutes discrimination, or that there will be discrimination because of it. I don’t think this is true. In most cases we won't even know what kind of profiling happens in the black box, because machine-learning systems choose their own indicators for certain clusters of people. If ethnicity is a factor in the algorithm, first of all, you won't know it in many instances, and secondly, it is not necessarily discrimination. Discrimination is different treatment on the basis of a certain characteristic. If you're treated the same on the basis of the same attributes, there is no discrimination. It might also have nothing to do with ethnicity but rather with underlying socio-economic structures that push people into certain behavior.

Don't be quick to assume that discrimination happens automatically with algorithmic decision making, but be wary that it might happen. When algorithms based on Big Data show a bias towards identifying a specific group, this sometimes points to structural problems within society: certain groups are de facto marginalized in socio-economic terms. If algorithms pick up on that, it is not the algorithm that discriminates but the society that did so already; the algorithm has only made it more visible.
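The point that a clustering algorithm picks its own indicators can be made concrete with a toy sketch. The following minimal 1-D k-means (plain Python; the incomes and the neighbourhood are invented for illustration) is never told anyone's ethnicity, yet still splits people along an existing socio-economic divide:

```python
def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means: returns a cluster label for each value."""
    centers = [min(values), max(values)]  # crude two-center initialisation
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign each value to its nearest center.
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # Move each center to the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

# Yearly incomes (in thousands) of a fictitious neighbourhood.
incomes = [12, 14, 15, 16, 55, 60, 62, 70]
print(kmeans_1d(incomes))  # → [0, 0, 0, 0, 1, 1, 1, 1]
```

The clusters fall out of the income data alone; whether acting on them constitutes discrimination is precisely the societal question, not a property of the algorithm.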

How do you consider Big Data from an ethical point of view?

There are several developments. On the one hand, the whole cyber discussion has led to the conclusion that the individual has become more potent in many relations, be it vis-à-vis the state, corporate actors or international affairs more generally. For example, Anonymous has started a war with ISIS online, meddling with the affairs of the CIA and GCHQ. At the beginning of 2015, Anonymous was trying to bring down all the social media accounts of ISIS. The CIA and GCHQ were, however, quite unhappy with this development, because the secret services were using these social media accounts to monitor ISIS activity.

On the other hand, the shifts in power that have emerged with Big Data are not mono-directional. They are quite complex. The individual becomes less important as a source of data. What's interesting is not what you are doing as an individual, but your behavior as part of a group – tendencies and trends within groups. Big Data analytics generates forecasts about tendencies for groups of similar people: what would a person like you do in a certain situation? Amazon is not really interested in what you are doing as an individual in isolation. It is interested in what you, as a member of a specific group of customers, are doing and interested in, so it can give you tailored shopping suggestions: people like you have bought this product after buying that other product. So you're treated as a category. The individual becomes less important.

The next development that we see with Big Data is the change in power relationships. Information is valuable, information is power, and networks are power. If you have power over networks – as Facebook has power over all the data produced within its network, but also the power to turn the network off – you are very powerful. At the same time, people in such a network have better abilities and opportunities to connect to others than people who are not connected. So there are different perspectives on Big Data and power relationships that put the emphasis on network-based power. At the same time, data is also a raw material, and in large quantities data becomes valuable, particularly if the data is relational, as is for example data from social media.

So some power relations have been shifting away from states and from individuals and have been moving towards corporate agents. Given other developments like globalization, the mobility of capital and tax competition between states, transnational corporations develop strategies to ease the burden of their operations by trying to escape regulation. States used to be able to cope with that escape by coordinating with other states or within regional organizations, such as the EU. But now, with the emergence of cyber and Big Data, this evasion has gone global, and this global reach cannot be regulated properly. Corporate actors can escape international regulation by placing themselves in jurisdictions that are favorable to their needs and still operate globally. That is a shift of power towards these corporate agencies and away from governments.

The extent and proficiency with which governments can collect and analyze data has been surpassed by companies like Google, Nielsen, Facebook, and others that collect data as a business. Even intelligence agencies buy some of their data for intelligence purposes. It is simply very costly to collect, store, format, prepare and eventually analyze the data. But by buying the data you become dependent on intermediaries, and you don't know if the information is true. You have to trust them – and trust is a power relationship. The person you have to trust has power over you. There have been tremendous power shifts that have gone unnoticed because we still look at power through the traditional lenses of money and military power.

In political science the theory of pluralism contends that power in Western democracies is shared amongst many groups. This allegedly made our societies more democratic. What is Big Data doing to pluralism?

I think there is a threat to democracy, and it lies in the question of to what degree information is available. Whether it is Google or a government defining the results you see when you search for a certain term does not make a big difference. For example, an Austrian company supplied China with an algorithm that filters search results in favor of specific political viewpoints. Now, if you google Tiananmen Square in China, you'll get hardly any search results about the massacre of the Tiananmen Square protests of 1989, whereas if you google that square in the West, you'll get quite different information.

The information you get – what you see on Google – is what limits the capacity for democracy. That's a question of access to information, but also of the computer literacy of the user. Every government has always tried to shape the discourse of history towards what is politically accepted; history books are always among the things most disputed between countries. With the emergence of these powerful search algorithms and Big Data we have been given great access and great tools to become more informed citizens, but at the same time, through these algorithms, companies and governments again gain the ability to define the information that you get.
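How invisibly such a filter can operate is easy to sketch. The following is a purely hypothetical Python illustration – the blocked terms and the tiny document "index" are invented – whose point is that the caller never learns that anything was suppressed:

```python
# Hypothetical sketch of a silent search-result filter.
BLOCKED_TERMS = {"1989", "protest"}

def search(query, index):
    hits = [doc for doc in index if query.lower() in doc.lower()]
    # The filter runs invisibly: no count of removed results is exposed.
    return [doc for doc in hits
            if not any(term in doc.lower() for term in BLOCKED_TERMS)]

index = [
    "Tiananmen Square visitor guide",
    "Tiananmen Square 1989 massacre archive",
    "History of protest at Tiananmen Square",
]
print(search("tiananmen", index))  # → ['Tiananmen Square visitor guide']
```

From the user's side the single remaining result looks like a complete answer, which is exactly what makes this kind of shaping of information so hard to detect.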

Never before in the history of mankind has there been the possibility to connect with like-minded people as there is today. Phenomena like the Arab Spring and Occupy Wall Street were made possible by the internet. Connecting with each other gives power to the people, which is inherently democratic.

But at the same time these tools are tools of control. Who controls these networks? When the protests started to emerge in Tunisia and Egypt, Twitter wanted to conduct maintenance and take the service down for a few days. There are rumors that President Obama placed a call to Twitter to tell them not to turn it off. Even if these rumors are not true, it is still a powerful image and a nice thought experiment to conduct.

The power over networks has become particularly important. What does that mean for democracy? We as people have been given stronger tools, but these tools are not made for the current legal structures that form the state. Today they operate under a different set of rules: algorithms become the new laws that regulate our behavior, and the two are not necessarily compatible. It might lead to more democracy, but we need to find a balance between our old laws and what Obama called Democracy 2.0.

The US presidency introduced the “Citizens’ Briefing Book” in 2009 – a sort of online polling system where people could bring forward suggestions for new laws. They dismissed this e-polling system because the most popular item on the agenda was the legalization of marijuana. Interestingly enough, the legalization of marijuana is now taking place all over the USA.

I think it demonstrates that there is a transition that we haven't successfully made yet. There is a danger to the state that might stem from democracy itself, and that's an interesting development. There is, however, the continuing danger that we are being deliberately misinformed and manipulated. That is not new – censorship always was and still is a tool of the state. However, this power is now shifting to other actors, namely transnational corporate actors, who might surpass states in their power to define opinions.

How do you consider the advent of the Internet of Things and the possibility for 'precision management'?

I am involved in a consortium of several universities (Cambridge, Leiden, Groningen) and other organizations that is currently working on a Big Data solution for the refugee crisis. The humanitarian needs are huge and urgent. Big Data makes it possible to develop tools such as a sentiment map of how people feel about refugees, but also maps of where job opportunities are and of which people possess which skills. Big Data can help match those, and thus help place refugees in supportive environments where their skills are needed and where their children have access to education. Governance of these issues will become much more efficient, also in humanitarian action, security and development aid. But there is still a lack of awareness of what is possible and what the risks are. Newspapers often feed the fear that Big Data is the new Big Brother. It might be true; it might not be.
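A toy sketch can show the kind of matching such a tool might perform: pair each person's skills with the job opening that overlaps most. All names, skills and openings below are invented for illustration; a real system would work on far richer data.

```python
def match(people, openings):
    """Greedily pair each person with the opening sharing the most skills."""
    pairs = []
    for person, skills in people.items():
        best = max(openings, key=lambda job: len(skills & openings[job]))
        if skills & openings[best]:  # only match if there is any overlap
            pairs.append((person, best))
    return pairs

people = {
    "A.": {"welding", "metalwork"},
    "B.": {"nursing", "first aid"},
}
openings = {
    "shipyard welder": {"welding"},
    "care assistant": {"first aid", "nursing"},
}
print(match(people, openings))
# → [('A.', 'shipyard welder'), ('B.', 'care assistant')]
```

Even this trivial version illustrates the governance point: the same data that enables helpful matching also concentrates sensitive information about vulnerable people in one place.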

It reminds me of the American science fiction author Isaac Asimov. He wrote the book Foundation, which is built around the concept of psychohistory: a combination of psychology, insights from the social sciences and economics that is basically a prediction tool. He was foreshadowing Big Data in the 1950s. The promise of psychohistory, as of Big Data, is to be omniscient and to be able to predict and even prevent certain future events.

We might be able to micro-manage in advance and prevent certain things from happening. For example, the dream of predictive policing – empowered by Big Data analytics – is to prevent crime before it happens, much in line with the Minority Report paradigm. But once you have prevented a crime, you don't know whether the crime would even have taken place. This is the alternative-history problem. To make the prevention or preemption of a crime acceptable to an audience, you need proof of the immediacy of the threat – something that gives people the impression that a crime was actually prevented. In the opening scene of Minority Report, Tom Cruise intercepts a murder just as the jealous husband is about to stab his wife with a knife. Had they captured him two days earlier, the would-be murderer would simply have been picked up at work, and the audience would have raised very critical doubts about whether he would ever have committed a crime. Also, if crime numbers drop under such an approach, is it because crime prediction works or because people are simply more afraid of being caught? In essence, the legitimacy of such a system would crumble.
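The alternative-history problem can be made concrete in a small, entirely hypothetical simulation (the risk scores, the threshold and the population are invented): once an intervention pre-empts every predicted crime, the outcome data needed to validate the prediction no longer exists.

```python
import random

random.seed(0)  # deterministic toy run

def would_offend(risk):
    """Ground truth that can never be observed after intervening."""
    return random.random() < risk

# Invented risk scores for a fictitious population of 1,000 people.
population = [random.random() for _ in range(1000)]

# Without intervention, outcomes exist and the scores could be validated.
observed_crimes = sum(would_offend(r) for r in population)

# With pre-emptive intervention on everyone scoring above 0.5, flagged
# individuals never get the chance to act, so their outcome is forced to 0 ...
post_intervention = [0 if r > 0.5 else would_offend(r) for r in population]

# ... which means the data can no longer say whether any flagged person
# would actually have committed a crime.
unverifiable = sum(1 for r in population if r > 0.5)
print(observed_crimes, sum(post_intervention), unverifiable)
```

Roughly half the population ends up in the "unverifiable" bucket: the system can claim every one of them as a prevented crime, and nothing in the data can contradict it.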

There are also legal problems. There is the presumption of innocence: nulla poena sine lege (no punishment without law, but also no punishment without a crime). Either we have to reformulate the principles of criminal law, or we won't be able to implement the kind of Big Data prevention that predictive policing is hoping for.

Is there enough public debate about Big Data and its consequences?

I'm very concerned about this. We have moved far into the fifth domain, as cyberspace is often called. It has been creeping into our society so rapidly that we haven't had time to respond to it in our academic work, our thinking and our political action. It is the clash that older people, like myself, experience when someone is talking with me and typing on her cellphone at the same time. The younger generation are on the phone all the time; they are used to that and find it acceptable. That is only one of many indicators showing us that there is a societal development. Some of these developments are simply a clash of cultures; others are a shift in power relationships. Much of it we don't understand, because people lack the knowledge and insight into what is happening with Big Data, the power that unfolds and the impact it has on their lives without them noticing.

Much blame goes to sensationalist media, which only write about moments when Snowden reveals privacy violations by the NSA – newsworthy because it's scary. Many journalists don't know much about the ways cyber and Big Data impact our lives, because there's a lack of technological interest and a generation gap as well. At the same time, some blame goes to governments that ignore this important aspect of daily life in their countries' educational plans. We need strong educational measures to compensate for the changes occurring around us: to prepare children and adolescents, but particularly also the older generations, for the ways Big Data and cyber developments are affecting society, politics and our human rights. Only then will society be more receptive to the problems and possible solutions.


Big Data Ethics

Ethics | Posted by Anno Bunnik 17 Dec, 2014 14:01
by Andrej Zwitter, University of Groningen – December 17, 2014

Big Data and associated phenomena, such as social media, have surpassed the capacity of the average consumer to judge the effects of his or her actions and their knock-on effects, as Facebook parties and the importance of social media for the Arab Spring vividly demonstrated. We are moving towards a change in how ethics has to be perceived: away from individual decisions with specific and knowable outcomes, towards actions by many who are often unaware that their actions may have unintended consequences for anyone. Responding will require a rethinking of ethical choices – and of the lack thereof – and of how this should guide scientists, governments, and corporate agencies in handling Big Data.

Big Data versus Traditional Ethics

Since the onset of modern ethics in the late 18th century, we have taken premises such as individual moral responsibility for granted. Today, however, it seems Big Data requires ethics to rethink some of its assumptions, particularly about individual moral agency. Big Data poses some familiar ethical difficulties (such as for privacy), which are not per se new. Beyond its novelty, however, the very nature of Big Data has an underestimated impact on the individual’s ability to understand its potential and thus to make informed decisions. Examples include, among others, “likes” on Facebook being sold to marketing companies in order to target certain micro-markets more specifically, and information generated from Twitter-feed-based sentiment analyses being used for the political manipulation of groups.
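What selling "likes" to marketers enables can be sketched in a few lines. This is a purely hypothetical illustration – the segments, likes and weights are invented – of how each like nudges a user's affinity score for an advertising segment, with the highest-scoring segment deciding the micro-market the user is placed in:

```python
# Invented segment weights: how strongly each "like" counts towards a segment.
SEGMENT_WEIGHTS = {
    "outdoor gear": {"hiking": 2, "camping": 2, "travel": 1},
    "home fitness": {"yoga": 2, "running": 2},
}

def best_segment(likes):
    """Score every segment from a user's likes and return the top one."""
    scores = {segment: sum(weights.get(like, 0) for like in likes)
              for segment, weights in SEGMENT_WEIGHTS.items()}
    return max(scores, key=scores.get)

print(best_segment(["hiking", "travel"]))  # → outdoor gear
```

The user never sees the weights or the scores, which is precisely the knowledge gap – the ethical disadvantage qua knowledge and free will – described above.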

In a hyper-connected era, the concept of power, which is so crucial for ethics and moral responsibility, is changing into a more networked form. Retaining the individual’s agency – i.e. the knowledge and the ability to act – is one of the main challenges for the governance of socio-technical epistemic systems. Big Data-induced hyper-networking exacerbates knock-on effects within the network. In other words, the nature of hyper-networked societies increases and randomizes the collateral damage caused by actions within the network, and thereby the unintended consequences of people’s actions.

New Challenges

Just as global warming is an effect of the emissions of many individuals and companies, Big Data is the effect of individual actions, sensory data, and other real-world measurements creating a digital image of our reality, i.e. “datafication”. Already, the simple absence of knowledge about which data is in fact collected, or what it can be used for, puts the “data generator” (e.g. online consumers, cellphone owners, etc.) at an ethical disadvantage qua knowledge and free will. The “internet of things” and ambient intelligence further widen the distance between one actor’s knowledge and will and the other actor’s sources of information and power, and they strengthen dependency on services built on Big Data. Furthermore, ownership of Big Data leads to a power imbalance between stakeholders, benefitting mostly corporate agencies and governments with the necessary know-how and equipment to generate intelligence and knowledge from data.

In the sphere of education, children, adolescents, and grown-ups still need to be educated about the unintended consequences of their digital footprints (beyond digital literacy). Social science research might have to consider this educational gap and draw its own conclusions about the ethical implications of using anonymous social Big Data, which nonetheless reveals much about groups. In the area of law and politics, political campaign observers, law enforcement, social services and lawyers will increasingly have to become data-forensic investigators, both to utilize Big Data themselves and to recognize the illegal exploitation of its possibilities.

A full open access version of the paper has been published in Big Data & Society.


Big Data and the Binding Idea of the State

Security | Posted by Anno Bunnik 20 Oct, 2014 17:01

Although the negative implications of Big Data use and misuse are potentially more significant on the grand scale than the opportunities, when considering Big Data in relation to governance and security it is easy to forget the positive opportunities that inhabit the periphery of the discussion.

Facilitating a break-out discussion group on this topic at an event last week (Big Data 1.0 + 2.0 @ FACT, Liverpool) provided an insight into how individuals from various fields and sectors perceive the implications of Big Data. Predominantly, these perceptions were negative.

The group's fearful and pessimistic outlook seemed to envision Big Data as a wedge that large corporations and governments will drive through society. These forces would seek to monopolise data and consequently deepen societal division based on class, race and demographic classification, possibly leading to isolation and social unrest on a scale not seen before.

In contrast to this bleak outlook, is there an opportunity for Big Data utilisation to do the opposite and actually enhance the factors that bind citizens to the state, creating a national atmosphere of optimism and unity? Hypothetically, the health service, law enforcement, revenue and customs, and other public sector organisations that are so scrutinised by society could become more efficient and cost-effective by using Big Data as a tool for clarity and best practice. This would in turn reduce the economic burden on the taxpayer, i.e. the general public, easing distrust and increasing public morale. An increase in trust in the public services could dramatically encourage positive social change at all levels, especially among disaffected and isolated members of society.

This, of course, is feasible only if the public sector harbours a transparent approach to data sharing and Big Data use. The Data Sharing Open Policy Process is an encouraging sign that this is the approach the UK Government might take. The Cabinet Office-led open discussion on developments in this field is overseen by public representation and offers citizens the opportunity to contribute. There is of course no guarantee that this transparency will transcend all sectors, but taking an optimistic approach, can we imagine that the future positive implications of Big Data outweigh the negative?

The conceivable impacts of Big Data exploitation on society are multifaceted. The negative implications will undoubtedly manifest in varying degrees of severity, and the correct oversight needs to be put in place now to ensure the most effective contingencies. The positive possibilities, however, present opportunities that cannot be missed, and they require attention to avoid a knowledge and capability gap that could be as significant as the ethical and security threats associated with Big Data monopolisation.

Public consciousness of these implications needs to be raised through engaging events such as Big Data 1.0 + 2.0 (FACT, Liverpool, 9th October 2014) and through the forming of unfamiliar networks and partnerships that bridge interfaces and prevent organisational isolation. Transparent and informative processes need to be in place to seize the opportunities of Big Data, build the knowledge and skills needed to avoid the capability gap, and neutralise further threats.

This column was written by Mr. Robert Barrows (CASI, Liverpool Hope University). Follow his blog.


Intelligence and Ethics

Ethics | Posted by Anno Bunnik 31 Jul, 2014 14:13

Despite negative headlines, there is a genuine narrative about implementing ethical concerns within the Intelligence Community.

A keynote lecture by Professor John Grieve, Ethics and Intelligence, was an encouraging example of the presence of genuine ethical concerns inside the UK Intelligence Community for a number of reasons.

Firstly, Professor Grieve, as a leading figure in the sector, is an advocate of lifetime command accountability, i.e. he is prepared to answer questions on command decisions he made at any point in his career – provided nobody has shredded the paperwork! This is a principle to admire, and it should be followed by those in leadership positions not just within the police but in the military and other sectors where decisions with collateral effects are being made.

Secondly, Prof. Grieve promotes the duty to learn. Alluding to the literary great John Steinbeck’s Sea of Cortez and a host of philosophers, he shows how an open mind and a consilient approach to policing will produce the most ethical and best practice for intelligence.

Third, the consilient approach to Intelligence-Led Policing is exactly what innovative networks of academics, industry leads and professionals are practising. Ethical concerns are at the core of these partnerships: with representatives from unfamiliar backgrounds collaborating, more bases of ethical concern are covered.

Finally, Prof. Grieve brought attention to ethics in the workplace itself. Fairness at work, or FAW, is a major concern of his. Treating employees – i.e. intelligence analysts, contractors, etc. – with the respect they deserve, and noticing problems that may lead to dissenting behaviour, will ultimately result in fewer whistleblowers. Although whistleblowing is an activity which forces transparency in organisations, there are plenty of avoidable cases which add to negative press and a feeling of public distrust of intelligence practices. It is public consent-building which will allow greater freedom and the most effective practice for the intelligence communities – in the interest of reducing threat, risk and harm for society.

This column was written by Mr. Robert Barrows (Project Administrator at CASI, Liverpool Hope University). Follow his blog.


Big Data and Privacy Working Group - a promising review?

Global Strategy | Posted by Anno Bunnik 04 May, 2014 15:49
This week, the White House Big Data and Privacy Working Group Review, led by John Podesta, published its findings. This highly readable analysis of the "social, economic, and technological revolution" serves as a very interesting statement of the state of play on governing Big Data.

Part of it deals with the very obvious merits of Big Data for, notably, health care and the economy. No news there.

It gets interesting when the report tries to formulate a policy framework for the management of Big Data – especially by signalling the threats Big Data poses to citizens and consumers. It is somewhat reassuring to note that the most influential government in the world is aware of Big Data's potential for discrimination and for a (potentially dangerous) readjustment of the relationship between the state and its citizens.

Also, the privacy watchdog ACLU seems quite excited about its findings, which is reassuring for those worried about whether privacy as a concept will survive the information age.

However, the policy recommendations are largely geared towards national solutions, such as protecting data gathered in the classroom so that it is used for educational purposes only.

There is still a lot of work to be done to arrive at efficient and legitimate policy frameworks, especially at the global level. Who will be involved in this process, and which values will take centre stage? What will the relationship be between the creators of data (i.e. citizens and consumers) and those who use it (i.e. governments and corporations) for their own benefit?

These and many, many more questions remain unanswered. We can only conclude that the ramifications of the information age are only just beginning to dawn on us.

To be continued.
