Tumblelog by Soup.io

December 09 2013

Overlapping layers of different meanings

The Internet as a global, transnational communication network makes it very easy for us to communicate with each other regardless of national borders and physical distance: All humankind becomes one, prejudices and misunderstandings disappear and we enter a golden age of peace, love and understanding. So says the utopia.

But even if we put aside all the existing harassment, bullying and generally disgusting behavior that happens online, that perception is false.

The misconception could already be seen in the Declaration of the Independence of Cyberspace which stated:

Cyberspace, the new home of Mind […] the global social space we are building to be naturally independent of the tyrannies you seek to impose on us.

I have written about why the perception of the independence of the Internet is wrong, but let’s look at it from another perspective. The Declaration of the Independence of Cyberspace was a utopian declaration by a small elite of scientists, artists and activists, so expecting it to scale to any real-world scenario would have been ludicrous.

While Google, Facebook, Twitter, etc., as well as all those decentralized blogging/publishing platforms, are available almost everywhere and are also used in many places (we sometimes forget the big local players in our vision of the global utopia), the people using them still have completely different backgrounds and terms, different contexts: The definition of “Freedom”, for example, differs quite a lot when you ask people from different countries and backgrounds. The view that many US Americans have of “the government” is very different from what people from France or Germany[1] might think.

We might communicate on the same platform and we might – in spite of all the local communities speaking the local language and nothing else – find a language we can use to communicate cross-country, -timezone and -continent. But we are still stuck in the cultural imprint we acquired growing up somewhere. We carry this around with us, implicit, unexpressed and believe that the same signs (as for example “FREEDOM”) mean the same thing everywhere. But they don’t.

We are communicating globally but the messages we send tend to differ quite a lot from what we believe we sent: We falsely believe that other parties have the same context that we do. And this does cause problems when we take global debates and arguments into our local contexts. Often they no longer work.

Our different worlds overlap, they no longer merely border on each other. We live within layers of layers of layers of meanings, interpretations and contexts. And while we can often navigate these somewhat contradictory mashups of ideas, doing so can disconnect us from local debates and can make our arguments look weird or wrong, because we fail to recontextualize them, to translate them properly.

And this is one of the most important skills for our digital future, I believe: To translate entities from one context to the next, and therefore connect the local debates of similar issues into one bigger global debate. Because even in the same network using the same language, we need to translate and adapt.

  1. just two examples

The post Overlapping layers of different meanings appeared first on Nodes in a social network.


December 08 2013

Stickers on cameras

While many privacy technologies and schemes are too complicated or cumbersome to reach mainstream penetration (PGP, OTR and similar things), there is an approach that you can see at hacker conferences just as much as on the machines of people who wouldn’t be considered tech-savvy: small patches stuck on webcam lenses.

The beauty of the idea is its simplicity: You do not want someone to watch you via your own camera, which is very often pointed your way, so you just make the video unusable. Brilliant. Easy to explain, easy and cheap to apply, and causing hardly any inconvenience. Why can’t all methods be this simple?

But when thinking about it further a few very interesting questions emerge:

  • Your camera can only be used to spy on you by someone who has access to your machine (via some trojan or backdoor). While you can block the lens, the attacker can still log your passwords, steal your secret keys, meddle with your personal data, impersonate you online using your locally stored cookies and do all kinds of harm. The patch signifies a whole lot of distrust of the computer at hand, so why use that computer?
  • While the patch disables the camera, it does not disable the built-in microphone, allowing attackers to keep listening to the room your computer is in. If that is the actual threat you are trying to defend against, why are you not doing anything about the microphone?
  • When seeing a patched-over camera I tend to ask people if they also taped their smartphones, because those have cameras as well and are just as vulnerable as your computer. I have yet to meet people who explicitly did. Why is that? Even if it sits in your pocket a lot of the time, it could still take pictures as soon as you take it out (in fact my friend scy built a tool for exactly that use case).

The patch does not really create a lot of extra security: If your device is untrustworthy enough for you to tape over the camera, you should probably get a new machine. So the patches largely seem to serve two purposes.

The first one is a communication purpose. By taping your camera you communicate to the people around you that you are not going to take pictures of them and that you “do what you can” to not have their picture taken by your devices. It’s a statement to build trust (which could obviously also be used to betray said trust, but that tends to be the problem with trust either way) and show respect. In this way it can make sense, especially in certain contexts (being within a very security-oriented crowd or amongst people who face very real threats from governments or others).

The second purpose is the feel-good purpose. Security is hard, the world is scary and I feel helpless in front of the computer. The patch creates – as little as it actually helps – a sense and feeling of control: I take one tiny aspect of possible danger and literally patch it with a small thingy to make me feel better. It’s like the Saint Christopher’s medal that some people stick in their cars to protect them from accidents: It does not really do anything but give you a warm and fuzzy feeling.

This little “security hack” shows how little the practice is actually about security and how much it is about control. About the feeling of agency and power. And a little about communication. But not about security in the sense that IT security consultants work with.



December 06 2013

Transparency is the new objectivity

Objectivity is still considered one of the cornerstones of journalistic professionalism. The journalist is supposed to not take any sides and to report on all issues without prejudice or opinion (unless it’s within the explicitly marked opinion sections of the publication).

There is a lot of value in this idea, and it would really rock if people were machines and the world were a very simple computer simulation. But people aren’t and the world isn’t.

Given any significantly relevant issue we are almost all stakeholders: Whether it’s taxes, healthcare, human rights or the question whether the Internet destroys certain businesses and if they and their business models should be protected, we are all part of the social system engaging in the discussion. And even if not every single individual is a (or considers themselves a) stakeholder, the bigger, more powerful entities of the public discussion certainly are.

So we cling to this idea of the more-human-than-human journalist who transcends all petty human emotion and writes in true objectivity. We believe in it and therefore read texts as if they were objective. Which they almost never are.

That’s not the journalists’ fault: The task at hand, separating one’s rational mind from all the social dependencies of the individual and all the messy chaos of the world, is truly herculean. It’s so big a challenge that journalists almost have to fail.

We need to rethink what to expect from journalists. Why cling to an idea we know is hardly ever possible? Why pretend like journalists are not human beings with dependencies (social, financial, etc)? That’s a ludicrous idea!

What we need is for journalists to be more transparent. To make their dependencies more obvious. Every article should basically come with a short note: “the person writing this has the following relevant social/economic/etc. connections”. You write about some topic and support someone’s perception? Cool. You are in a relationship with that person, or that person signs your paycheck or is an old friend? Be transparent.

That doesn’t mean that journalistic standards don’t matter or that journalists shouldn’t be truthful. But they need to be aware that they are presenting their truth and help others see where their truth might differ from their readers’.

Accepting that human beings are more than just rational computing machines (even if they want to be at times) is not just pragmatism. It is humane. It is a tiny step towards removing people from the whole rather insulting “you are just a functional piece within this economic/social structure” idea.

Humans are more than computers made from flesh, more than robots made from bone.

(Addendum: The same is also often true for bloggers and writing in general)




December 02 2013

Neutrality is a sorry replacement for fairness

Net neutrality. Platform neutrality. Neutral media. Our civil society loves neutrality.[1] The word neutrality is imbued with the magical vibe of things being nice and fair.

The idea of neutrality is beautifully simple: Given a platform or service, we call it neutral if it treats all service or platform users equally, if it does not discriminate against or favor certain groups (based on income, gender, sexuality, and whatnot).

And this has led to a very simple dogma: If we treat everyone the same, that is fair; neutrality creates fairness. But that’s incredibly wrong.

One of the main reasons is compartmentalization: We take a certain aspect of the world and define rules to make that aspect neutral (believing that this will create fairness) ignoring that the rest of the world is still horribly unfair. But all that existing inequality and unfairness propagates into the proposed neutral and fair new context: Let us think about net neutrality in its purest form for a second.

Net neutrality demands that a provider treats every packet, every piece of flowing data, equally and just tries to push each one of them out as quickly as possible. Let’s say that all providers are perfectly neutral and just transfer data. Your provider just applies its resources neutrally to incoming data. But there are still a bunch of shared resources: Maybe your street shares an uplink, or your provider only has one somewhat limited peering location with an interesting peering partner (say, for example, YouTube). Customer BigShot with the JustGiveMeALotOfInternetICanPay package decides to stream a different HD-quality YouTube video to each of their 13 TVs. Cool. The data flows and saturates the uplink. Customer Unemployed with the IcanHardlyAffordTheInternet package can now hope to get a packet or two in between all of BigShot’s data. The inequality of the real world, in the form of different access to physical resources and money, translates into the so-called “neutral” service/platform: The neutrality of the service/platform does not translate into fairness at all.
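That saturation effect can be sketched in a few lines of Python. This is a toy model, not a real queueing simulation, and the user names and packet counts are just the hypothetical ones from the example above:

```python
import random

CAPACITY = 100  # packets the shared uplink can forward per tick

def neutral_link(offered):
    """A perfectly 'neutral' link: every packet is treated identically.
    When demand exceeds capacity, the excess is dropped uniformly at
    random, with no regard for who sent it."""
    packets = [user for user, count in offered.items() for _ in range(count)]
    random.shuffle(packets)
    delivered = packets[:CAPACITY]  # forward what fits, drop the rest
    return {user: delivered.count(user) for user in offered}

random.seed(1)
# BigShot streams to 13 TVs; Unemployed just wants a trickle of packets.
result = neutral_link({"BigShot": 130, "Unemployed": 2})
print(result)
```

Even though the link treats every single packet identically, BigShot ends up with at least 98 of the 100 available slots simply by offering more traffic: perfectly neutral, not remotely fair.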

A similar case happens when you give every person access to the same music store to publish their stuff: You can put your track next to the newest works of U2 or Katy Perry; that’s neutral and fair, right? But you do not have access to the same social network and publicity power that those famous people and their publishers do. The publishing platform might be neutral, but your song still doesn’t stand a chance.[2]

We carry neutrality in front of us like a flag, believing that it will solve the issues of our times: If all platforms were neutral everything would be cool. That would work if we erased all money, all personal property, all skills, all knowledge and all social ties. If we had a new world of untrained, unconnected human beings starting fresh. Which is not a very reasonable approach.

We talk neutrality because it’s a simple discussion: Neutrality is a simple idea, its implementation can mostly be measured. It’s far from the complexity and mess that talking about issues of inequality and unfairness brings with it. But that is the discussion we need to be having.

It’s not about net neutrality; it is about how we can give everyone who wants it fair access to the Internet with all its glorious services and means of communication and expression. It’s not about platform neutrality but about the question of how we can help those less powerful to compete or hold their own (in whatever context) against those in power. How can we make sure that the Internet does not just migrate existing power structures into the digital realm?

I don’t believe that the term “neutrality” really helps us tackle the relevant questions of today and tomorrow.

(Addendum: That does not mean that neutrality is a worthless concept or that net neutrality in general is a bad idea. It means that measures such as net neutrality only help fix one very specific issue while the bigger issue at hand, the issue that we should care about, gets ignored. The problem is the equation neutral == fair.)

  1. especially in connection with its sibling “objectivity”
  2. except for the one runaway indie success every few months that then serves as a “you can do it example” even though it was just a freak accident just like winning the lottery



November 27 2013

Surveillance is just a symptom, the issue is control

In light of the recent developments, “surveillance” is a really hot topic. Whether it’s about criticism, its potential uses or its decentralized cousin “sousveillance”, when reading about global politics and people’s liberties and rights you can hardly escape the word: surveillance.

Depending on which agent within the public debate you ask, surveillance makes the streets safer, people its powerless slaves, it creates a utopia of security and prosperity or an oppressive dystopian dictatorship. Obviously the debate is more complex spanning a whole continuum of positions between the two extremes I just pointed out.

We see studies being created and thrown into each other’s faces about whether surveillance works or not (many experts arguing that surveillance is at least not a very cost-efficient way to get results; just look at how expensive the Utah NSA data center is compared to how the agency has failed in the past in its “war against terror”). We see civil-rights activists in particular arguing that mass surveillance infringes on human rights so massively (pun intended) that it is generally and absolutely unacceptable.

I believe that this whole argument deals with the wrong idea. Surveillance is a consequence, not a root cause, so we are fighting symptoms, not problems. The real issue at hand is the idea of control.

Governments don’t deploy mass surveillance because they like mass surveillance or data. They cling to the idea of being able and being expected to be in control. And we are far from innocent here.

Bad stuff(tm) happens and what is the first question the media/the public asks? “Where was the police?” quickly followed by “How can we make sure it does not happen again?”. It’s deeply human to do so. We hate bad things happening just as much as we want to feel safe and protected.

This puts governments in a very difficult position, because – let’s face it – the world is a messy and chaotic place. Billions and billions of people, each having their own opinions, agendas and ideas. Each one of them potentially able to cause a lot of damage. If you are a politician focused on security the world is your own personal hell.

So what do you do? You try to control as much as possible. Try to limit chaos and chance, try to make the world predictable. In come the technologists: The software engineers, the big data people, the scientists and the IT-consultants.

Their promise was simple: “Give us enough money,” they said, “and we can predict who’s going to misbehave.” They promised control. And the politicians, driven by the public’s demand for control, bought into the sales pitch.

And that’s where we are now: blaming governments for the mass surveillance they implemented to meet our desire for a safe and comfortable life. And if we want to break that vicious circle of ever more money being thrown at the surveillance apparatus because it just isn’t good enough whenever something bad(tm) happens, we have to start by not buying into our own common fear of the chaotic and often somewhat alien world.

The mass surveillance state is just as much a product of “evil” politicians and companies as it is a response to our demands, our wish for a strong and powerful entity making our lives safe, replacing the parents whose lack of omnipotence we had to realize growing up.

The world is chaotic and absurd and weird and terrible and brilliant. It’s time we grew up and realized that, we as the public just as much as governments and people in power. And that would not only make mass surveillance history but also make sure that no other similarly dangerous strategies get implemented.



November 25 2013

23andme and tante

For a few months now I have been a customer of 23andme, who offer genetic testing for a rather low price (about 100 US$). Their business model is to sell genetic testing at a loss but to use all the available data for their own scientific analyses: You might have heard about their patent for a product helping parents to select the traits of their offspring (that product is obviously not yet available, and there is a lot of debate on whether it would be ethical to offer or use it, a debate we are not going to look at in this blog post).

I had heard about 23andme quite a while before I signed up. The 100$ were quite expensive when I was still a student, so I shied away from it. I also felt somewhat uncomfortable with the idea itself: I have always perceived this body as a cage, as the broken hardware my mind (software) runs on and is deeply impacted by in regards to what it can do, what it can perceive and when I have to stop doing what I want to do to sleep, eat or whatever. Accepting the reality of this body has therefore always been an issue for me: I hate being defined as this random sack of meat and water; I feel objectified when people refer to it as if it was “me”. This is one of the reasons I shy away from physical contact such as hugging: It has nothing to do with me not liking other people or finding them unclean or whatever, it is about me hating the feeling of being forced to accept and realize this body’s existence and power over me. On the other hand I have no choice but to live with it, with the way it makes people see me (as a white, male dork), with the way certain chemicals put into the body make my mind stop working. My body and me are not on the best of terms. These days me and the body I have live by a non-aggression pact.

But as much as I might want the world to be different, I cannot ignore the relevance of this body’s properties for my life, my mind and the amount of time I have. So I decided to spend some of the money I made on Flattr and ordered myself a kit. Which is surprisingly easy to use: You get a package with everything you need. You spit into a container, close it, put it into an envelope and add a few forms for customs (including one scary-looking “yeah, this is bio stuff but nobody will die, I promise” form). I called DHL and someone came to my office at work the next day to send the sample to the US as quickly as possible. It was equally weird and fascinating to track the movement of that tiny container of my spit from Oldenburg, Germany all over the globe to LA, California. But I digress.

It takes a few days until the first results pop into your account: Your sample needs to be processed and the results have to be stored in the database. Then 23andme’s real work starts: Their computers try to contextualize the data you put in, to find (distant) relatives of yours and to estimate what kinds of risks you might have due to your genetic setup.

One thing you need to understand when looking at 23andme results is that they don’t fully sequence your genome: The process they use only looks at a few hundred thousand of the more than 10 million possible SNPs. But while their data is not a full dump of all your genetic information (so no cloning yet), you can download the parts they did genotype in a plain-text format for your own analyses.
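To give an idea of what doing your own analyses on that download looks like, here is a minimal sketch in Python. It assumes the common layout of the raw-data export (comment lines starting with “#”, then one SNP per tab-separated line: rsid, chromosome, position, genotype); the two-line sample is made up for illustration:

```python
def parse_raw_snps(lines):
    """Parse a 23andme-style raw data export: '#' lines are comments,
    data lines are tab-separated rsid, chromosome, position, genotype."""
    snps = {}
    for line in lines:
        if line.startswith("#") or not line.strip():
            continue
        rsid, chromosome, position, genotype = line.rstrip("\n").split("\t")
        snps[rsid] = (chromosome, int(position), genotype)
    return snps

# A tiny made-up excerpt in the export's format:
sample = [
    "# rsid\tchromosome\tposition\tgenotype\n",
    "rs4477212\t1\t82154\tAA\n",
    "rs3094315\t1\t752566\tAG\n",
]
snps = parse_raw_snps(sample)
print(snps["rs3094315"])  # ('1', 752566, 'AG')
```

From a dictionary like this you can look up individual genotypes, compare files with a relative, or cross-reference rsids against public SNP databases.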

Note: I publish my results here. Since my genes are shared (at least partially) with my parents and sister I did ask them beforehand. If you plan on sharing your genetic data (which is cool) please ask your parents and siblings for their OK. Also understand that your genetic data is now public and that future employers, insurance companies and the secret conspiracy of the reptilians will potentially have access to it.

After a week you get some results in your profile. Some weird, some interesting, some potentially scary. Let’s start with weird.

[Image: My caveman-quota]

So I am a little more “caveman” than the average. This does not really tell me much about this body, it’s just a weird little statistical detail. But this image already illustrates how 23andme tries to visualize and contextualize the information you get. Instead of a pure data dump they try to make it easier to understand.

Another similar piece of information concerns heritage: Where did your family originate? Through the distribution of certain genotypes across populations, 23andme can estimate your ancestry to a certain degree.

[Image: My ancestry]

For me this wasn’t all that surprising, but I can imagine that many people with a more heterogeneous background would find this rather interesting.

Obviously some aspects connected with genotyping can have a grave influence on your life. There are many hereditary conditions and illnesses, often without any real chance of a cure. Personally I was very scared of potentially having conditions like Huntington’s disease, for example, a diagnosis which can really fuck up your day and the rest of your life. But still, I’d rather know and prepare instead of not knowing. Also, 23andme does not automatically show you diagnoses with potentially devastating results: In order to see those you have to click through multiple dialogs telling you the risks and making sure that you really want to know (you have to repeat this for each and every potentially devastating diagnosis, even if you do not carry any problematic genotype).

Luckily my health overview didn’t contain news as bad as Huntington’s, but still, this body has issues.

[Image: Overview of my health risks]

The overview is split into 4 segments. Health risks lists conditions your body has a certain predisposition for. It does not mean that you will get a certain condition, it just means that there seems to be a certain influence that your genes have. Inherited conditions lists things that you either inherited or didn’t: Huntington’s disease is one of them. Traits summarizes certain properties your body has that are not considered to be an illness or illness-related. Drug response shows how your body potentially reacts to certain drugs.

Personally, I do have an inherited condition. Well not really the condition itself but I carry a gene that could cause the condition in my kids in case my partner had the same genotype. It’s a nasty illness that causes iron to build up in your organs making them fail. If I was planning to have kids, this would be useful to know to check for this illness early in order to cope with it. I also metabolize certain drugs very quickly meaning that the “normal” dosage would probably not work on me as intended. This also could help with my treatment if I ever had problems with acid reflux which luckily I don’t at the moment.

Let’s look at the health risks segment. We know that I have a certain disposition towards prostate cancer (yay again), but what does that actually mean?

[Image: My prostate cancer disposition]

Compared to other Europeans, I run a 35.7% probability of developing prostate cancer by the age of 79. The risk is way lower below 50 and rises quickly after that. Here we see how 23andme tries to make the statistical data comprehensible: Interactive diagrams and visualizations help you understand the risks at hand instead of panicking. The page also links the studies and scientific papers leading to this interpretation, as well as potential actions you can take, for example foods to avoid (if applicable).


Personally I found the whole experience very well designed. 23andme tries very hard to help people understand what certain aspects actually mean, they link their sources and they offer a full download of the data.

But it’s not for everyone. Ignoring the cost (which isn’t low), the data would be hell for anyone with hypochondriac tendencies. Also, if you have a reasonable suspicion that you might carry a certain hereditary illness, you should probably go see your doctor and have him or her do the testing and care. This is – no matter how much information 23andme gives you – nothing you should be forced to learn alone in front of your computer. These results can be scary and you should be aware of that.

The results have in some way helped me understand this body better, I feel like I now have at least a few pages from the manual for this model in my hands (even if it’s just the “known bugs” section). On the other hand this hasn’t helped me develop a better relationship to this physical form: The body gets reduced to data, to geno- and phenotypes. It’s an interesting rational approach but – at least for me – it didn’t change my emotional connection to this biological machine. Which I didn’t expect it would, but still …

Since I like all things open (Open Source, open science, postprivacy and such) I uploaded the data I got to the public openSNP repository of data. You can also connect to me on 23andme in case you have an account there (tante@tante.cc).

I hope I could outline what the service offers, why to do it and why not to do it. If there are any questions left you can write them into the comments or just contact me directly (if you do not want to ask publicly).





November 24 2013

A general hacker ethic probably isn’t possible

In order to separate the good and well-meaning hackers (white hats) from the bad and evil crackers (black hats), people, especially here in Germany but in many other parts of the world as well, have relied on the “hacker ethic” that Steven Levy summarized in his 1984 book “Hackers: Heroes of the Computer Revolution”.[1]

Sadly the hacker ethic as it was written down in 1984 is many things, but not a code of ethically sound principles: It is a collection of catchy statements, some of them loosely connected to what some might consider ethically sound ideas. But it lacks consistency, it lacks political awareness and posture, and while some statements might have been groundbreaking in 1984 (“you can create art & beauty with computers”), they look and feel dated today, now that the debate around and perception of computers has changed significantly.

Over the last few years I have worked on rethinking the whole code and have talked to many people about it (people within the hacker community as well as some watching that community from the outside, like me), and the longer I worked on it, the clearer it became: I don’t believe that a meaningful, resilient general hacker ethic is possible.

Why would anyone want that anyways? Well, a code of ethics can help form the perception of a group from the outside: “These are the rules we live by”. It can serve as a tool to distance oneself from unwanted behavior and it can help form the community itself, providing a common code to unite under. All these things would be very valuable especially for hackers in these times.

The digitalization of human life is progressing with growing speed, and people who understand the technology and some of its consequences are needed for the public to be able to find their bearings: Caught between the PR speak of companies and the “there is no alternative” rhetoric that tends to define so much of our political landscape, the public needs informed voices to help categorize and understand the different arguments and positions. And we just can’t expect journalists to carry all that load on their own.

So why shouldn’t we be able to write an ethical code embodying the hacker ethics? Because the whole subculture has diversified: Where it started out mostly as computer tinkerers trying to understand how certain deployed technologies work, today we have very different big groups within the subculture.

First, we have the security people, who still dominate a lot of the public perception of hackers: people who find and publish security bugs in software, who consult companies and people in order to increase their security standards. Many people in this part of the subculture probably at least loosely associate with the whole Cypherpunk movement that tries to fight government surveillance and similar dangers through more encryption. Secrets matter a great deal in this group.

Second, you have the makers, who invent 3D printers, who build smart clothing and find new and cheap ways to bring manufacturing techniques to the masses. They invent prototyping solutions for electronics and create art, music and craft. This group has a very playful approach to turning their visions into reality. It’s about creating fascinating objects and sharing how to do it. A very communicative, very unsecretive group.

Third, we have biohackers and cyborgists, who push the boundaries of what a human is and can be: integrating technology into their physical bodies not just out of necessity but as a positive vision of self-improvement and expression.

Those are just three big and very different groups, but there are more. And many of them clash fundamentally: Security people, for example, consider integrating tech into one’s own body lunacy because it’s all so hackable and insecure.

The subculture has many cleavages along which it separates into different camps: open source vs. just-needs-to-work, political vs. apolitical self-image, progressive vs. conservative, singular topics vs. big social vision, etc. This leads to many possibly irreconcilable arguments, with the same people being on one side in one argument and on opposing sides in another. This web of cross-cutting arguments increases the impact of the existing heterogeneity of the subculture and limits any meaningful consensus to rather hollow statements such as “Be excellent to each other”.[2]

I don’t see any group or person having the influence or pull at the moment to unite the scene under a set of meaningful rules. Maybe that’s ok, maybe that scene just wants to faff around and do whatever is interesting at any point. That would just mean that we probably shouldn’t expect any reasonable input from it when it comes to politics. The next years will be interesting.

  1. I won’t go into detail on why the whole hero narrative given in the title is bad; I wrote a whole article about that a few months ago
  2. which is not a bad statement, just not one you can actually use to decide whether a given action is morally good or bad in most situations

The post A general hacker ethic probably isn’t possible appeared first on Nodes in a social network.


November 19 2013

The error of seeing data as property

When talking about personal data and its protection we have adopted a slightly wrong vocabulary. We could just continue to use that wording because it’s so established but words do matter. Words cannot be disconnected from their connotations, they never exist in a vacuum.

When we talk about personal data, we treat it as if it was our property, a possession. We use possessive pronouns (“my data”, “our data”, “your data”). We use phrases that only make sense in the context of property like for example “data theft”. This is fundamentally broken.

And we know this, we argue like this when it comes to the area of copyright and the completely broken concept of “intellectual property”. We never get tired to tell copyright holders, publishers and especially authors that the ideas of property as we have developed for physical, scarce goods do not work in the digital sphere.

But when it comes to personal data (another slightly unfitting term) we seem to be blind. Not consciously but probably based on our deep yearning for control: Control of our physical belongings is something we are rather well-versed in and we feel comfortable with. We have our stuff and we decide what happens with it. We can even lend it to our friends and neighbors and we’ll get it back soon. We know where things are and who has access to them. How convenient.

It does not work like that with data. We lend “our data” to our friends, we call it sharing, but we never get them back. Data multiplies and gets copied. The implied control that the property concept communicates is wrong and in fact dangerous. Because it limits how we think about data.

It is as if we used the mental model and terms from the geocentric universe to plan spaceflight. It looks like the words and concepts fit but in fact they are wrong at the core. They are not suitable to discuss the issue at hand.

We do not own “our” personal data. It’s not ours it is about us. And choosing the right and precise words for the objects and concepts we are debating is really the first step to come up with anything close to a rational argument.

Wording matters.

The post The error of seeing data as property appeared first on Nodes in a social network.


Behavioral prediction

Today is a Tuesday or a Thursday. Why am I so consciously aware of this?

Because my smartphone teases me. I tend to go to the gym on Tuesday and Thursday evenings, so starting at noon my phone tells me how long a trip from wherever I currently am to the gym would take if I went by bike.

On my phone I use the Google Now service, a smart personal assistant whose whole purpose is to try to predict what kind of information I might need at any given moment. For this Now offers different so-called cards that can display routing information, the weather, new information on topics I researched earlier or notifications about my favorite artists releasing a new album. When I visit a new town, Now will show me interesting touristy spots and photo opportunities as well as an automatic currency converter and cards containing greetings in the local language.

Now bases its predictions on a few different things. First, it employs all the data about the world that Google’s never-sleeping search robots have gathered and structured: It knows telephone numbers and locations as well as the opening times of many of the places around. It also uses data that people using Google’s services generate collaboratively and unconsciously: In order to find interesting photo opportunities, Google can just look at all the photos users uploaded to its Google+ service and the locations where those photos were taken. Finally, Now is personal, so it uses a lot of data about me.1

I give Google my current location and allow it to use my browsing history most of the time. I explicitly gave Google’s services a few markers like for example which bands or tv shows I enjoy. And Google can crawl through my Gmail folder to determine dates relevant to me like for example the tracking information for a package I am expecting or a flight I have to catch.

All these different sources of data are merged and mangled, learned from and evaluated in order to be my personal assistant. And like any assistant worth its salt, Now tries to predict what I will be doing or what I will want to know, not only right now but also in the near future.

Google Now’s results are not bad, considering that it is still quite a young project. And obviously other companies will try to offer competing services (I’d be curious to see what Wolfram could bring to the table) to help us navigate our lives. It also has kind of a democratizing quality: Where once only very wealthy people could afford personal assistants keeping track of all the dates and information relevant to them, we can now automate large parts of that job and offer access to these services to more people. Which generally is a good thing.

But – just like any other signal generator that we choose to perceive and use – services predicting our behavior also form it. I might feel lazy and would like to “forget” (as in avoid) going to the gym, but Now reminds me. It makes it harder not to go, makes it a conscious decision, forces me to pay a mental price. In the case of the gym we can see this as generally positive: I’ll stay in shape and probably healthier, which is good not only for me and the people who like me but also for the healthcare system, which will probably save some money in the long run.

On the other hand, it gives the service provider a lot of power over me: In my case, Google can very effectively direct me to certain areas. I visit a new town, Google tells me to check out this cool place, I go. But who tells me that the owner of said place didn’t pay Google to send me there? It would be rather trivial to categorize me and target me or my demographic accordingly.

It is obviously a question of trust: I need to be able to trust a service provider in order to comfortably rely on a personal assistant service, maybe even more than I need to trust other providers I use. A guiding assistant integrates a lot more deeply into my life and my thought processes than, for example, a random online shop does. I do have to trust the shop to send me the stuff I paid for and to keep the contract both of us signed2, but that’s about it.

Systems predicting behavior don’t have a very good reputation: Everything related to the idea of “Big Data”, of huge databases generating non-obvious conclusions, has in our public perception the taste of surveillance state and repression. Not without good reason: Police and many security-oriented politicians have demanded behavioral prediction as a way to predict crimes. The 2002 movie Minority Report popularized the phrase “precrime” for the idea of somehow predicting who would commit a crime and punishing them beforehand.

But while most people still reject the idea of precrime – and rightfully so! – I think it’s not sensible to cast aside all sorts of behavioral prediction just that easily.

There is a lot of value in helping people navigate their lives more effectively. The automation of things automatable is, all required changes to the way we distribute income set aside, not only somewhat nice but in fact humane. Automation like Google Now makes a certain level of quality accessible to the general public: Even if you are not rich, you have access to technological helpers that allow you to mitigate some of your personal issues and flaws. You might not be great at tracking time and appointments, or your sense of orientation might be wonky. Technology can help you there, help you act more effectively in the world. And it does matter: Even if your lateness stems from a diagnosed and legitimate illness, when you are constantly late you will face disadvantages.

But predictive technology is not always great. I have pointed out some of the possible dangers of these systems: They can easily be used against their users. How can we fix this?

I believe that transparency can help strengthen the legitimacy of the results and the users’ trust in them. Google Now does this for some cards, but it does not go far enough. A website pops up and the card says “A website changed that you frequent regularly”: That makes sense, I understand why I get that information. For photo spots it might be relevant to me to see that my friends went there – or maybe I explicitly do not want to visit the photo spots that are very popular? Amazon’s shopping recommendations are wonky at times (I buy a present for my wife and the recommendations go bonkers), but Amazon offers a clear path to mend the results: I can click the “why do I see this” link and get an explanation as well as a way to influence my recommendations (“Do not use this item in the future”).
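To make that transparency idea concrete, here is a hypothetical sketch (the data and the function are invented, not Amazon’s or Google’s actual API): every recommendation carries its provenance, and an explicit exclusion set implements the “do not use this item” switch.

```python
def recommend(candidates, excluded):
    """Return (item, explanation) pairs, skipping items the user has
    explicitly excluded -- the 'do not use this item' escape hatch.
    Each candidate already carries the reason it was suggested, so the
    'why do I see this' question can always be answered."""
    return [(item, why) for item, why in candidates
            if item not in excluded]

candidates = [
    ("lens cleaner", "because you bought a camera"),
    ("romance novel", "because you bought a present for someone else"),
]
excluded = {"romance novel"}  # user clicked 'do not use this item'

for item, why in recommend(candidates, excluded):
    print(item, "->", why)
```

The point of the sketch is architectural rather than algorithmic: If provenance is attached to every prediction from the start, showing it to the user and letting them veto inputs costs almost nothing.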

I believe that services predicting user behavior can provide a lot to their users and can even democratize access to assisting technologies. But service providers need to understand more clearly that a service such as Google Now lives and dies with trust and transparency on the service provider’s side – trust that has recently been quite shaken by the activities of the so-called “intelligence community”.

At the moment it’s en vogue to brand Big Data applications and behavioral prediction as evil, as destroying society and every single human within it, but their consequences are just as varied as the consequences of the invention of the diesel motor or the stocking frame. And therefore they need differentiated responses.

  1.  As some of you might know, I don’t believe in the narrative of the zealous protection of privacy and informational isolation. That does not mean that I think everyone needs to follow my path, I just wanted to make this clear in order to prevent the usual “Don’t you know that Google uses the data you give it for eeeeevil?” kind of comment.
  2. when it comes to data sharing etc.

The post Behavioral prediction appeared first on tante.blog.



November 17 2013

What’s new, technology?

Whenever a new technology comes along, its pundits will tell you how disruptive1 and how gamechanging it is – nothing will ever be the same(TM).

This phenomenon isn’t new: Similar narratives were triggered by the automatic dishwasher, the telephone and nowadays by basically every new smartphone app. All these things are awesome and did change the world. Maybe not the smartphone apps, but marketing in that area is equal parts nuts and on crack, so we probably shouldn’t look too closely. In general, though, many technologies have had quite an impact (though not every impact was great).

But when it comes to innovation we tend to focus on the new implementation and not so much on the concept itself, and that leads us to make a few wrong calls.

Let’s look at a rather current example: Google Glass (and similar augmented reality devices). Some feel that they are something completely new, that augmented reality is the next step in human evolution. Others believe that this overlay of “virtual” information on the “real” world will make us lose touch with what’s real, true and good. And there’s basically any possible opinion in between. But is it really all that new?

This is a picture that I took on one of my runs. It shows some random street corner, nothing special about it. It’s not even pretty or anything.


Look at how much abstract information we put there, right into the physical world: Signs telling you restrictions on parking, a clock, markings on houses telling you what to expect inside, signals telling you when to cross the road and when not.

The “layer of information on top of the physical world” that augmented reality and the convergence of the digital and the physical sphere have promised? It’s already there.

The existing implementation is not all that fancy: Many of the signals and signs don’t blink, many of them are not even properly personalized forcing you to filter manually through many signals that you don’t and shouldn’t care about. But the concept is already there.

Augmented reality does bring a lot of new stuff to the table. It automates a lot of the filtering and therefore allows us to move within and interact with the world more efficiently. Personalisation allows us to integrate new sources of data into the abstract layer of information that lies over the physical space. All that is cool and new and can even be a source of change: New associations and connections created just by merging in more aspects of the world. But it remains an upgrade in implementation (though everyone who has ever developed a product knows how much work upgrading to a better implementation is).

Why does this matter? Is the distinction between implementation and concept more than just semantics? I believe it is.

Humans have been around for quite a while and we haven’t been lazy: In between all the wars and watching TV we have developed mechanisms and approaches to deal with different problems and concepts. We have developed the idea of personal property to deal with the problem of the distribution and control of scarce resources2, we have developed social rules and structures to allow us to live together better3.

By understanding clearly if something is a new concept or just a new implementation we can go through our library of rules and structures to see if we already know how to deal with something. It might lead to us adding to the rules, it might lead to us reviewing some but it gives us a basic approach that is tested and – at least partially – works.

Not only does this recycling of ideas leave us more time to think about other interesting or pressing issues (or watch TV), it also calms that nagging fear of the new and unknown. When Google presented its Glass project, many people were quickly up in arms trying to start a rage against the augmented-reality machines, but on closer inspection there are really just details we have to adapt in our social structures to deal with it. The foreseeable social “disruption” is somewhat negligible in most contexts. Because the change is just in implementation.

The hype machine fueled by pundits and PR people implies a level of novelty that more often than not just isn’t there. That does not mean that big conceptual changes are impossible or not coming – changes that we cannot fix by just patching our existing frameworks. The way we automate work is, in my opinion, no longer compatible with the way we distribute resources: Automating work means there is less work for humans to do, which makes the idea that people work a job to gain their income no longer feasible. And if we don’t follow the Luddite way of forbidding automation, we need to implement different ways to give people the resources to live.

The problem comes down to us not having boundless mental capacity. We, as human beings, need to address the really pressing, dramatic issues coming our way. To get this done we need to focus on the conceptual changes, try to shape those changes and set boundaries before they just crash down on us like a tidal wave4.

  1. is anyone really able to hear that word applied in technological contexts without chuckling?
  2. there are quite a few issues with that concept when people try to apply it to non-scarce objects like ideas or text but let’s not open that can of worms right now
  3. yes, many of those rules need reviews or culling
  4. this is basically what we did with automation where we never cared to look what the consequences are

The post What’s new, technology? appeared first on tante.blog.


Tags: english

November 16 2013

Rejected #30C3 talks

As every year I’m documenting my rejected session proposals for the Chaos Communication Congress.

title: “Human behavior is incredibly pliable, plastic.”
subtitle: The behavior-inducing qualities of settings pages
What does the look and feel of (privacy) settings tell us about how we perceive and act within the digital world? How do settings dialogs – even unwillingly – enforce certain kinds of behavior? And who is responsible? The user? The coder?
The Internet is more than a medium; it constitutes its own living environment, a space where people live and communicate. But where the physical world with its physical restrictions is something we have simply been cast into, the digital world is a world we fully created, a world whose rules are completely made by human beings.

But the Internet was developed by scientists and engineers who hadn’t planned for the ubiquitous social network that we live in today. So social norms, rules and ways to interact with the world and its inhabitants emerged organically, unplanned, chaotic (in the purest meaning of the word).

The artifacts creating the Internet are software products which themselves come with certain biases and restrictions, projecting their creators’ view of the world onto all their users. And this projection creates and forms our behavior.

In this talk we will look at different web sites/web services, particularly their settings and privacy settings. We’ll analyze what kind of perspective these artifacts/services enforce on their users and look at what kind of user behavior this perspective creates.

Are we OK with that kind of externally controlled lifestyle? Does this correspond to our ideals such as freedom? What is the responsibility of a software engineer towards their users? Do settings matter? What impact on our behavior do settings have?

These are the questions we’ll be trying to find answers to in this session.

title: Data-driven Democracy
subtitle: A vision for the year 2030
Instead of just reacting to the news leaking out of our political system, we need to start acting, to start forming a future. This is my proposal.
The speed and sheer massiveness of current events like the PRISM/Tempora/NSA scandal have impacted our subculture deeply: When it became obvious that even some of the far-out conspiracy theories were true, and had been true for a while, the whole community was pushed back into a state of shock. Now here we are, trying to pick up the pieces, figuring out which encryption schemes still work, which tech is safe enough and what kind of horrible news might be just around the corner.

This has escalated a trend that has been going on for some time now, a trend that started when corporate services, structures and patterns overtook the Internet as we had known it. We let ourselves be pushed into a purely reactive role: Something would blow up and we’d try to fix the mess; a new service would emerge and we’d have to see whether it made sense. And while we were still looking at the different pieces, they had already been washed away by something new.

We need a vision, we need to start acting again, need to shape the web and the future, and for that we need to know what we are working for. Here’s one: Let’s save democracy.

Democracy is in trouble. Political processes seem to have disconnected from the lives of the people; laws basically just “happen” according to the “there is no alternative” dogma. Suddenly we have Internet filters, security tools are illegal, in spite of broad social support homosexuals still don’t have the same rights, and many people can’t even make a decent living working 60 hours a week. It’s a system running wild. But we can debug it.

We are in a unique position because we understand the importance of data. Decisions need to be grounded in data, discussions must be structured around data. Biases, beliefs, intangibles have to be challenged by data, by something that can give people knowledge about the world that goes beyond “he said, she said”.

In this talk I will apply the techniques that we all use every day to our global political system and will develop a small “patch” to solve some (not all) of the current problems we have. Finally I’ll outline a way to implement the ideas based on our current system.

There’s not one perfect or necessary way to move forward for our subculture. While there can be other approaches, I am proposing using our skills, our perception of the world to form a political utopia. Because technology is political.

title: Extending the commons
subtitle: The only winning move at data monopoly is not to play
Information is power. Faced with powerful entities (governments, corporations) we need to develop new methods to empower the individual because whatever it is we are doing isn’t working (enough). We have all the pieces in our hands, let’s extend the commons and destroy data monopolies!
The last year has brought us many news we didn’t like. Apart from many companies gathering all kinds of (big) data about us and what we like and buy and watch, different governments have used their intelligence services to build big databases of information about all of us, our communication, our social networks.

So here we are, separated into small groups or even simply alone, faced with the enormous power of the “evil empire” and its economic powerhouses. While we can encrypt our personal communication and try to stay under the radar, the data monopoly that governments and corporations have built is not going to become any smaller this way.

But what can we do to challenge the existing data monopolies? We need to extend what we call “the commons”: The open-data initiatives and the freedom-of-information laws that some countries have passed are a start; now we have to go on creating commons that reduce our dependency on the existing data monopolies. We have to be able to take science back, take biological or pharmaceutical research back. We need to create ways to do our own social studies. And we need to give information to the people.

The data monopolist (or oligopolist) has, just by knowing more about the world and the people in it, a power advantage over any one human or small group. The extended commons can change that: When dealing with powerful entities, the individual is in a bad position, often simply for lack of information. Negotiating a job contract is hard when you do not know how much an hour of work is currently worth in a certain area. Fact-checking a medical study is nearly impossible because the only entities with the data to do any real research are the very corporations we are supposed to keep in check.

The pieces are all available to create our own, common databases of information helping the individual mitigate the structural advantage the powerful entities have. Let’s create the extended commons.

In this talk I will illustrate the kinds of data-derived power that governments and corporations have. I will then analyse which of these monopolies/oligopolies we can reasonably subvert and which data silos won’t die that easily. Finally I’ll show some ways we can build a counterweight to those monopolies and oligopolies. We’ll end the session with a few general ideas and processes that can help us challenge existing and future data monopolies and oligopolies.


The post Rejected #30C3 talks appeared first on tante.blog.


November 15 2013

On growing government data requests

Yesterday the official Google Blog released documentation about the amount and sources of government requests for user data. It’s an interesting set of data showing which governments request data and how often data is produced. These publications are a very relevant piece in the large puzzle called “checks and balances” and make journalism and the control of government actions possible.

Now, we know that not every one of those requests is OK or legal. Many of them are in fact bogus or unnecessary, and we need to get rid of that kind of abuse of power (and that’s what it is: A government agency using the power it was given in a way not intended or allowed by the people).

But one point that I found interesting was the heading. The blog post is titled

Government requests for user information double over three years

Over the last three years government requests doubled. That sounds huge, doesn’t it? And this is just for Google. What about younger platforms that only recently gained traction and don’t even have that kind of baseline? Are government requests exploding? Is this just another sign of an oppressive government pushing back against the Internet and fighting it with an iron fist? The answer is probably no.

The Internet has become mainstream. In the US, about 80% of the population uses the Internet. And the type of use has changed as well: Where a few years ago many people used the Internet just for shopping or a few news sites, today social networking goes without saying, not only for teenagers but throughout most age and social groups.

This means that more actual life happens on the Internet; more people live at least parts of their lives in this digital sphere. But when more of life happens online, it’s rather obvious that the activities law enforcement has to deal with shift into the online space as well, and that law enforcement follows them there.

The explosion of requests is a sign that the Internet is now really fully mainstream. And the mainstream is not the small elite of scientists and artists building their intellectual utopia, it’s a melange of everything human: Beautiful and evil and petty and empathic and brilliant. So crime is part of the deal.

We can argue at length about whether the way even minor breaches of copyright law are being prosecuted is OK (it’s not, and completely over the top). We can also argue that, because of the relative ease with which these requests can be made, there are probably more of them than necessary. We need to raise the bar for when such a request can be made; we need full traceability of who requested what kind of data, for what reason, with which legitimation. But declaring all those requests evil, and their increase in general a sign of the evil empire, is wrong.

Because, and this is one of my pet peeves, we need to find reliable, democratically legitimated ways to enforce rules online. It’s easy to believe in the libertarian dream of a fully uncontrolled and unregulated Internet when you’re a well-paid, skilled, white, male IT consultant1 but we do have real issues online that need addressing: There is fraud, there is a lot – and I mean A LOT – of mobbing going on. And there are many people who just want to use the Internet, who have the right to participate and communicate without being harassed.

Right now we only have local governments and their law enforcement. And obviously that doesn’t align well with the transnational Internet: Something illegal in Germany might be fully legal in the US and vice versa. We could take those specific cases and conclude that local police and local law don’t work online, so they should just stay out of the Internet – for Freedom(TM) and against Surveillance(TM).

That is the perspective of those who are powerful online, those with the skills and resources to defend themselves – the high priests of the Internet. It’s not mine.

  1. this is an exaggeration, you don’t need all those privileges to be Internet Elite(TM)

The post On growing government data requests appeared first on tante.blog.


November 14 2013

Externalizing tech

Many of you have probably seen the totally awesome recent XKCD that provided simple answers to questions about technology.


XKCD – Simple answers link

Not only are the answers Randall Munroe has added here very well thought through, the collection of questions is also quite representative. Take one of these questions and you can write a quick “OMG THE INTERNET MAKES US HORRIBLE!!!” piece for conservative newspapers and publications (in fact, some people have made quite a comfortable living doing exactly that). Take three questions and write a book about how the Internet will destroy society, and you’ve got a bestseller on your hands.

But the way Randall phrased the questions (simply repeating how they are phrased in the media and public discussion) points to a deeper problem: The externalization of technology.

Will Technology X do Y to us?

Technology is seen as something external from us humans. Technology does something to us, we are not the subject but the object, the victim.

But that is wrong. Technology is not something that weird ancient aliens rain down on us, something that just magically happens to us. Technology is not the weather or an earthquake. It doesn’t appear out of nowhere.

All technology comes from the human wish to gain a better grip on the world, to extend one’s reach in whatever way: It might be based on the wish to see further or to be able to influence objects at a greater distance. It might come from wanting to lift something heavy or from the wish to travel more easily. More often than not it all comes down to laziness (I still consider the dishwasher one of the greatest inventions of all time).

Technology also has a deeply cultural foundation: You can learn a lot about a culture, a society by looking at what kind of technology it develops or tries to develop. This is not just about whether military development gets a lot of funding or not. It has a lot to do with a vision.

The stories we tell about the future shape what we strive for, what we consider to be “the future”. If you look at the classic Star Trek series you will find things that have found their way into our current tech: The shape of clamshell phones is a direct implementation of the communicator, and when scientists started developing hand-held sensor devices they called them “Tricorders”.

Many people from my generation (I was born in 1979) still complain about the future not delivering on what was promised to us because the flying skateboards from Back To The Future are not available.

We could go on like this for hours: Comparing existing technologies with older stories and using current stories to project what kind of devices we’ll try to build in 10 years. But the more of those we list the clearer it becomes: Technology is deeply rooted within us, it emerges from our individual wishes for better ways to influence the world and from the narratives we as a society develop.

That is why we cannot talk about technology without talking about the people using or developing it. It’s also why regulating or restricting specific technologies (as some people want to do with Google Glass, for example) cannot work: The social system that created the tech will create a replacement to fulfill the same wishes and desires.

Technology does not just provide tools and gadgets. It also tells our story. But not the story we actively write, the story that makes us look all noble and superheroish. It tells the story of what we want, what we invest time in, what we spend effort on. And the lack of certain trivial technologies also tells us what we do not value.

If we look at the world and at what technologies get developed, we get a different understanding of the world. One that might even be shameful at times. One where we have to account for letting many people die of curable diseases.

We are used to talking about tech like it was external because that allows us to externalize many of our darker sides. It’s a cop-out, nothing more.

The post Externalizing tech appeared first on tante.blog.


November 10 2013

No Future Generation

Yesterday I ranted about SciFi and how it has turned into just another reproduction of boring cliches and tropes about violence and aggression, a set of stories about physically dominant males saving the world, stories that devalue competence and smarts.

But while my complaints from yesterday are true there is an even bigger problem with our narratives (not just in SciFi). I believe that we live in a no future generation.

Now I am not referring to a revival of 80s nihilistic punk ideas but to a general lack of a vision of the future in any meaningful and actionable way, shape or form.

Where the 50s had a vision of the future that was based on technological progress, flying cars and robots cleaning our houses (think “The Jetsons”), where the 70s had a cuddly view of the future where everybody loved each other and mother earth would bring us all together (I simplify), we have … nothing. Well, not nothing. We do look at the future but not through the glasses of a positive vision or some goal. Right now our culture has really just one way to look at the future: The dystopia.

Everything about the future is bleak. Earth will be destroyed by us, our pollution, our wars or whatever. Privacy and civil liberties will be killed by companies and governments who all scheme and plot to ruin the world while twisting their mustaches in a menacing way. There also might be zombies. Everything and everyone is fucked.

The future is something to be afraid of, to fight and to prevent from happening as long as possible.

Dystopian stories have always had a very important role in our societies. They allowed us to reflect current developments and problems by projecting them into the future. But dystopias don’t really talk about the future, they talk about the now by exaggeration.

A dystopia can never help you find your way, it cannot give you any meaningful goals (apart from the obvious “Prevent X” movie plot). It can tell you where not to go. But it still leaves you alone when it comes to deciding which path to follow.

This becomes obvious when looking at how our activists and NGOs and defenders of civil liberties present their case: It’s all about saying “No” to change and about defending the status quo that was the vision the bourgeois generation before us had. The narratives were copied and pasted from our parents’ notebooks and now our notebook is full of things that we believe need protecting but empty when it comes to the question: “Where do we go from here?”

This is the reason that so many of the issues connected with technology seemingly cannot be resolved: People claim how important their privacy is while still using all those social networking tools, people claim how much they care about a clean environment while driving SUVs.

The world moved on. Not without us but without asking us: We used technologies to transform our immediate wishes into products and those products changed the way the world works, they changed the ways we can interact with the world. Without looking – en passant – we changed the world, but our narratives didn’t develop with it, didn’t change.

“The world like it was in the 70s just with Wikipedia” is not a vision for the future. It’s only one thing: Lazy.

Now I know that the way I see the world is not how everybody does (and how boring would that be?). Maybe I am wrong and privacy is not just a bourgeois fantasy. Maybe rethinking what constitutes a human or even a consciousness is not the way to go because how we saw it in 1985 was perfect.

But no matter how you see the world, no matter which of the existing dogmas and ideas you want to save/destroy/salvage for parts, develop an idea how you want the world to be in 50 years. It’s not a game or a bet. We’ll not check up on you in 50 years to tell you that you were wrong, that’s not what a vision for the future is about.

Your vision for the future is your goalpost, something you can work towards. Something that you can use to reflect on any idea someone throws at you: Does that new idea help you to reach your goal or not? You can share these ideas. You can discuss them. They are the only way for you to really shape the future instead of just being a little boat on an ocean of (seemingly random) change.

Be a part of shaping the future.


November 09 2013

Incompetent humans in space (the sad state of SciFi)

Yesterday evening my wife and I watched the new (ok, not that new) Star Trek movie. I don’t want to write a real movie review here (others do that better than I could) but apart from the obvious flaws (lens flare galore, women who were basically just there to stand around in underwear, etc.) something else stuck with me.

Big SciFi (mainly in movies) has, for a long time, stopped doing its job. While SciFi can just provide some entertainment IN SPACE!!!11 I believe that the real purpose of SciFi is to reflect upon what we humans are doing right now. By confronting human beings with new situations, alien species and basically everything different, it allowed us to talk about current or upcoming issues somewhat distanced from their real-life contexts and prejudices. It brought a level of abstraction to the debate that helped develop actual discussions. And all was well.

But lately SciFi is often just the backdrop for generic action flicks: Explosions, boring generic action-hero-narratives and all the tropes that we have already learned to hate in the usual action films. Pointless.

In Star Trek, Kirk is not competent as captain. His skills are utterly lacking, but he can punch people. And shoot. And jump. He’s Super Mario with big eyebrows instead of a mustache.

Movies lately seem to be unable to think of protagonists that are anything apart from a punching, shooting and running Jump’n’Run character (with certain exceptions, the movie Gravity comes to mind). Conflict is resolved through force, through weapons, never through dialogue or cleverness (unless the cleverness means finding a new way to punch people).

Why do skills outside of the typical action template have such a bad standing? Is it the quick and simple answer of the patriarchy trying to push back on different kinds of social developments? Is it simply laziness by the writers and filmmakers? Or do we, as a society in which manual, physical labor and physical struggle have little importance for many, see the action-movie skillset and -plots as something different, something fresh?

I don’t have a solution but personally I am very much done with punchy-shooty-stories where social skills and intelligence do not matter. Oh and while we are at it, we’ve probably seen enough dystopias for a while (but that’s a topic for another post).


November 04 2013

An excuse never to fix anything

Bill Gates is known for his philanthropy. He and his wife have spent many, many millions of dollars towards finding cures or vaccines for many of the illnesses plaguing the so-called third world. Kudos to him and his wife for that.

In an interview with the Financial Times a few days ago he was asked to comment on Facebook’s Mark Zuckerberg’s statement that bringing the Internet to every person on this planet was “one of the greatest challenges of our generation”. Gates stated:

As a priority? It’s a joke. […]
Take this malaria vaccine, [this] weird thing that I’m thinking of. Hmm, which is more important, connectivity or malaria vaccine? If you think connectivity is the key thing, that’s great. I don’t.

So Mr. Gates says that while giving Internet to everyone is a good idea, it is his belief that there are other, more basic, more pressing, more important areas to fix before even thinking about connecting people. Sicknesses, hunger, war are way more important.

Now Gates has the right to set his own priorities just like everyone else. He’s chosen to invest his money in vaccines and medicine. Coolio. But does that make other approaches redundant or, as he says, “a joke”?

There are many different kinds of problems in the world. Some have to do with health, some with sustainability, and many are actually based on the really shitty way we distribute resources on this earth. And obviously some problems are more existential than others.

But if we all adopted Gates’ perspective what would we end up with? Standstill.

Because there’s always a bigger fish. There is always something more drastic or differently drastic. Why cure Malaria when people have nothing to eat? Why help them build up agriculture when there’s still war?

The belief that you can plot problems or needs on a simple line from “irrelevant” to “most relevant” is naive at best: Problems differ in relevance depending on who wants to help, who needs the help and what current events shape people’s lives.

Getting everybody Internet doesn’t replace curing Malaria, just as being inoculated against Malaria doesn’t fill your stomach. But all those activities help. Information helps (as in the Internet), medicine helps, food helps. Help helps.

There’s a frequent misconception that the money flowing towards bringing people Internet is money that would otherwise have gone towards medicine or food, but that’s not true. Maybe the money comes from a company trying to help that has no resources when it comes to medicine but that can build networks. They might not throw some money at a random charity but want to get involved, and they pick an area they know and understand. An area where they know their money can have an impact.

Or we can participate in a pissing contest to see who can find the one most important issue. And until then we can wait. Doesn’t sound like a great idea though.


August 15 2013

Death of the Super Hero

…Privacy-Man and Crypto-Girl are not wearing pants

The last weeks have been hard for the Internet. Not the network on a technical level but for the people it consists of, the so-called Layer 8, basically: Us.

When the news about the actual dimensions of the activities of different government agencies on the Internet hit us, many of us were left in a state of shock and awe, a state of pure and utter disbelief: The NSA (and its cousins from other countries) did all those things we never thought possible. The dystopia had become reality.

We know now that the NSA records basically everything, even – no, especially – the pieces of data they cannot decrypt yet. “Yet” being the most relevant term here. Cryptography as we use it today is always a bet on the opponent not having huge amounts of processing power to solve difficult mathematical problems. But given how badly a lot of encryption is implemented and the amounts of resources and people government agencies can throw at the problem, many encryption algorithms and commonly used key sizes will soon be no more effective than some kids using secret ink to write their little notes to each other.
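The “bet” here is just exponential arithmetic. A minimal sketch (my own illustration, not from the post; the attack rate of 10¹² keys per second is a made-up assumption chosen only to show the scale) of why every additional key bit doubles the attacker’s worst-case work:

```python
def brute_force_years(key_bits, keys_per_second=1e12):
    """Worst-case years to try all 2**key_bits keys at the assumed rate."""
    seconds = 2**key_bits / keys_per_second
    return seconds / (365 * 24 * 3600)

# Each extra bit doubles the search space, so a 128-bit keyspace is
# astronomically larger than the old 56-bit DES keyspace:
for bits in (56, 80, 128):
    print(f"{bits}-bit key: about {brute_force_years(bits):.2e} years")
```

The catch the post points at: implementation flaws and growing hardware budgets erode these margins far faster than the raw arithmetic suggests.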

But the cries of the netizens were mostly left unheard or at least unacknowledged: The mainstream media reported it and basically moved on, and when asking the people on the street, most don’t really care too much, either because they have more urgent matters to focus on (such as how to make rent while still being able to buy food for their kids and themselves) or because they just don’t believe that the activities of the NSA and similar agencies harm them. The majority of people are not terrorists and the promise of safety and security (as empty as it may actually be) carries a lot more value for them and their life than abstract concepts like surveillance.

In one aspect the mainstream and some Internet activists are in line though: Both always knew that the intelligence apparatus could listen in. Emails have always been more postcards than actual letters with envelopes, and the so-called metadata [1] would still stay visible even if the email itself was encrypted.
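The postcard analogy can be made concrete in a few lines of Python (a sketch of my own, with hypothetical addresses, not anything from the original post): even with a PGP-encrypted body, every relay on the route still reads the envelope.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"   # hypothetical sender
msg["To"] = "bob@example.net"       # hypothetical recipient
msg["Subject"] = "our plans"        # subjects are usually NOT encrypted either
# The body may be opaque ciphertext...
msg.set_content("-----BEGIN PGP MESSAGE-----\n...\n-----END PGP MESSAGE-----")

# ...but every mail server handling the message still learns
# who talks to whom, when, and roughly about what:
visible = {key: msg[key] for key in ("From", "To", "Subject")}
print(visible)
```

This is exactly the metadata the footnote below describes: the body is a black box, the addressing stays in the clear.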

We have always known that it sounded wrong that – while every DRM-type encryption on movies, video games or music was broken in days if not hours – the data we put out there could easily be defended through certain simple-to-use crypto tools. But we always had a fallback that made it all OK: we had our super heroes.

Super Heroes are not a new thing; they predate movies and comic books and all those things we might nowadays associate with them: Hercules? Super Hero. Siegfried of Xanten? Super Hero. Joan of Arc? Super Hero. Our ancient (and less ancient) myths are full of those larger-than-life characters who could tilt the earth just enough to make things OK again (though admittedly many of them had their fair share of tragedy and defeat as well).

In the Internet narrative, the role of the Super Hero was filled by hackers. Hercules, Siegfried and Joan were now called Mitnick, Appelbaum or Assange but they filled the same role: To make things OK again. In a digital world full of problems that changed our perception of privacy, secrecy and transparency we rested the responsibility to push back against the “evil” on their shoulders. A responsibility many hackers all too gladly took.

In the hacker narrative, the governments and companies were mostly movie-plot villains: Often slightly clueless, twisting their moustaches while explaining their evil schemes to the protagonist, who then pulled out his or her secret weapon from his or her tool belt and defeated the enemy. The end.

Our media mirrored that narrative closely: Movies like The Matrix and many others have pictured the hacker as the high priest of the digital age, the battle mage making the impossible possible with a few keystrokes and sometimes a little soldering. Amongst the most successful TV shows these days are a big number of CSI-like shows that recreate basically the same mythos of the wizard with a keyboard who can zoom into any grainy picture ten times to uncover the truth and who traces IP packets all over the planet from a fancy-looking graphical tool.

And whenever the weight of the world, the truth of our digital communication and possibilities of the intelligence apparatus came up, we turned to the hackers and we begged: “Save us!” And they answered.

We got Tor, we got more encryption algorithms and tools than we could count. Harddisk encryption reached a mainstream audience, OTR was built into many instant messenger clients and worked transparently and was mostly simple to use. The hacker’s magic bag of tricks seemed to be able to create tricks, hacks, workarounds and security layers faster than any company or government could churn out threats. [2]

And that is why this scandal has hit us, the Layer 8, the people who actually live on the Internet and not just see it as a glorified teleshopping channel, so hard: We lost our super heroes. We looked and realized that Privacy-Man and Crypto-Girl are not wearing pants, that their tool belts seem to be empty.

We see CryptoParties popping up all over the place in a last-ditch effort to save the old narrative, believing that we can get the genie back into the bottle by explaining how people can pull themselves, their opinions and goals out of the spotlight. By creating a new age of secrecy and disconnectedness that would keep the intelligence apparatus out of our lives. [3]

But only communicating in the dark, hiding one’s opinions and connections, will not help our democracies. Because a strong democracy is based on communication, on networking, on the constant exchange and discussion of controversial ideas. What is often called “digital self-defense” will in the long run not save democracy but just help a different system of oppression take its place – it is in fact just running away from the problem.

What can we do?

Get over our self-constructed myth of Super Heroes and back to work. I do agree with Jeff Jarvis in arguing that companies should do more to fight for their users. It is in their interest because in the end the scandal lands at their feet: Google, Facebook and all those companies might just be following the laws when they give the NSA and other agencies access to their users’ data but still get all the flak for it happening. But more importantly, we need to start changing our perception of intelligence agencies and our laws.

Intelligence agencies spying on other countries and their citizens can, in this digital world, only be compared to using weapons to attack the other country. Our globalized world gets smaller every day, with people’s social connections increasingly ignoring national borders. We can no longer accept having publicly funded agencies playing the secret aggressor against the world.

We need global treaties on intelligence disarmament, we need to change our local laws to no longer accept spying on people by a government agency just because those people have the wrong passport. The equation is simple: If your agency spies on me and mine spies on you and they collaborate, they spy on everyone. If we don’t want that to happen (and I refuse to believe that we do) we need to get rid of the old-school cold-war-type intelligence agencies that are built on a foundation of xenophobia and hate. We are better than that.

The old narrative of Super Heroes protecting us against evil has always kept “the evil” alive, has stopped us from dismantling it. We didn’t care to get rid of our intelligence agencies because we didn’t need to care about them. They were stupid and we had hackers and their tools. From that perspective maybe this collapse of our narrative is a good thing, helping us to shift our focus from implementing tools that help a small elite circumvent certain threats to starting a political campaign to fix the actual issues. I hope we will.

Ceterum censeo intelligence apparatus esse delendum.

  1. That means the data describing certain properties of the actual data, such as for example the target address of an email message or the date it was sent.
  2. And the government and companies were kind of stupid anyways, right?
  3. CryptoParties do obviously have their place, and helping more people understand how to encrypt their laptops and how to choose better passwords is a great project that I fully support.


June 13 2013

To be forgotten

or “trying to facilitate perfection”

Every social concept has its memes, the phrases, images or ideas that always tend to come up sooner or later when discussing the concept. When talking about privacy at some point the “right to be forgotten” will pop up.

This idea is within the top 5 talking points of the current global privacy discussion that the Internet has reignited. Even the European Union is currently trying to put it into law, to the cheers of privacy and civil rights activists.

Being “forgotten” seems to be the silver bullet for all the privacy issues we are having as a global society: You accidentally uploaded an unflattering photo? Have it removed forever. You wrote something stupid? Away it goes. You are scared that Facebook, Google or whoever the boogeyman currently is knows too much about you? Get removed from their servers, completely.

Now the proponents of this idea have a really mighty argument at their disposal that tends to resonate well with the crowd due to its obviousness: Human beings forget, so why shouldn’t databases?

Most people have no eidetic memory, so they have a hard time recollecting how things really were even after short amounts of time. For example: What’s the subtitle of this article? You read it a few seconds ago. But most of you probably don’t remember it (kudos to those who do!). Our society relies heavily on this “feature”: We can count on our mishaps and mistakes (if not too grave) soon being forgotten, gone like so many memories before. Why not build our technology to emulate this behavior?

Because when celebrating our own forgetfulness we are cheating ourselves. While everybody would probably be glad to have his or her missteps erased from history, we do everything we can to keep that from happening. In school we teach kids how to focus, how to commit things to memory. We admire people with eidetic memories or people who remember a great deal. Especially in societies that value education, the trope of the wise elderly professor knowing everything is still seen as something great, and having someone like that in your life or past is considered great luck.

We don’t want to forget; forgetting is a bug. Well, to be precise, unintentional forgetting is a bug — many would probably sell a kidney to be able to just erase the traumata of their present or past.

In 2007 the famous author Terry Pratchett announced that he had Alzheimer’s disease, a condition that at some point will start deleting his memories (amongst other things). And many of us were horrified; losing our memories is one of the worst things most of us can imagine. All those images in our heads that we cherish, the feeling we had when graduating, the first time you made your partner laugh, the warmth of your first kiss…

We don’t want to forget. So claiming it to be a great feature we should implement in our technology is just us bullshitting ourselves.

Look how successful historical documentaries are. When the Paris1914 Project released color pictures of the Paris of 1914, how many people browsed that page, shared it? How often have you heard the piece of advice that you shouldn’t buy gadgets and trinkets but spend your money experiencing things (and generating memories along the way)?

We don’t want to forget. But we ask for it because of our manic attachment to perfection (which I actually wrote a longer article about a few weeks ago).

We don’t want to forget, we just want others to forget our imperfections. And that is a whole different ballgame. Suddenly it becomes less about us and how good it is for us to forget the bad things in our lives, it becomes about us trying to control the world and all the people in it.

Like a child we want to be god, want to decide what the different dolls we play with and entities we interact with are supposed to know and what they must forget. It’s not something we want as a pact amongst equals — it’s something we want to hide behind.

The electronic databases all over the Internet belong to different companies and entities and it’s very easy to point at them as being the enemy, the monster with beady eyes tracking and saving our every move. What that perspective ignores is that we have made those databases part of us, part of our digital exoskeleton. Our social connections on the net, the archives of our ideas and comments and pictures and likes that our friends attached to it are a part of us. And them. It’s something we share. And that makes it something we really have no right to destroy unilaterally.

The right to be forgotten is a seemingly simple and effective solution for a real problem. But it also creates new problems: People could remove their part of a debate leaving other people hanging when trying to understand what was talked about. We are effectively putting a “best before” date on our history: Learn what you can from recent events while you still can.

In the end it boils down to us taking a bad habit from the world we know, a habit that causes stress, pressure and the constant feeling of being insufficient, and trying to implement all the necessary steps to make the new digital world obey the same rules.

And that is really just sad.

(This post originally appeared on Medium)


May 26 2013

“Through the Google Glass and what Malice Found There” @SIGINT 2013

I just got the confirmation that I’ll be speaking at this year’s SIGINT conference (which is my great pleasure since it is an awesome conference). My session will be:

Title: Through the Google Glass and what Malice Found There
Subtitle: regulating technology and data use

Abstract: The cry for regulation comes with every new technology or use of data. I believe that instead of focusing on specific products we need to develop a consistent pattern that we can apply to new ideas and technologies. I’ll describe such an approach based on historic examples and the basic properties of data and technology.

Description: Regulation of technologies, especially those considered to be harmful, is an important task for any given society. But ensuring a level of security and probably fairness has a tendency to limit personal freedom for the individual. This tension has always sparked heated discussion: How far can we go to regulate things we do not want? How much freedom do we want to give people? And how effective does a regulation have to be in order for it to be legitimate?

For quite a while now (but especially since the Internet has integrated itself deeply into our lives) we have argued a lot about how we should treat data and what kind of operations we should allow with data. Some have argued that data is like a weapon — inherently dangerous — others have proclaimed data to magically bring a better future for everyone. Obviously they can’t both be right and both are probably largely wrong.

In the last year the whole discussion flamed up again when Google announced its project Glass, a device bringing the Internet and all sorts of possible sources of data right into the user’s view while simultaneously allowing said user to instantly take pictures and video or audio recordings of everything in sight.

The reactions to this new product (that nobody apart from a few Google engineers had tested yet) ranged from enthusiasm about an interesting new technology to open hostility towards people even considering using it — the derogatory term “Glassholes” was quickly coined and certain people even considered physically attacking users of the device their right of “digital self-defense”.

Before anyone had really used the device in real-world situations, in fact before the public even knew what the device could actually do, we had an avalanche of ideas on how to regulate this specific technology: Ban it completely, ban it in public spaces, make recording of pictures impossible, make uploading of images or videos to the cloud impossible and many more.

But is that a reasonable approach? Do we want to play regulatory catch-up with every specific technology? And how far are we allowed to regulate what people perceive?

In this session I want to talk about data and about regulation. I’ll start with analyzing the concept of data and where it belongs in our human cognition. What actually _is_ data? Has it anything to do with 1s and 0s?

After having understood what data is, I’ll illustrate where regulation can reasonably be applied and what limits it should have (regardless of the actual technology at hand). We do actually have a long history of regulating technologies, so let us find patterns that we can apply more generally.

I’ll finish by applying these abstract rules to a current example, Google Glass.

Hope to see you in Cologne!


