A drone is delivering my Amazon order through a window.
A self-driving car is cruising around the corner.
An automatic coffee maker has already taken over my morning gymnastics, the mental ritual of breakfast.
I sit programming my day, writing poems, enjoying the life around me.
Digital ghosts surround me.
Digitized devices, things, and services are endowed with a consciousness of their own. In what appears to be a controlled modesty, we watch the evolution of the algorithmic material world, to which we increasingly delegate our worries, freedoms, hopes, dreams, and decision-making power. The question is: what kind of relationship can we cultivate with automatons?
There are algorithms written to do what we do not want to be bothered with. But if we believe that all rational accumulation of historical experience can be boiled down to an algorithm and handed to computers, which can then do better than any human, then we lose the game. Artificial intelligence has given us many beautiful things, which decide for us and offer themselves to us as partners in dialogue. Physically we no longer depend on them, but mentally they allow us to see things differently.
The problem arises when we ascribe to artificial intelligence the same ego (the "I") that resides in human intelligence. To think of the "I" as being of the same kind means that different intelligences are in direct competition. Some might call it the beginning of dystopian scenarios, others just a new step of humanity towards cognitive decline, dissolving the inner claim to an identity. But an appropriate coexistence of the two requires an understanding of their uniquely different modes of processing information.
If we are able to understand that plant intelligence, the intelligence of objects, artificial intelligence, and human intelligence are of different kinds, then the question that arises is one of communication and of forming communities, not of competition. It is a question of politics.
The development of autonomous vehicles is currently driven entirely by the private sector and the large corporations involved in it. All the choices they will make are private choices that touch on collective values, forcing a tremendous change in social legislation and challenging common practices in public space. Autonomous vehicles will hardly occupy garages as luxurious achievements of aesthetic perfection; more likely, they will displace our understanding of public space itself. It is easy to imagine coming across cars without drivers in the near future, perhaps even without passengers. This can trigger unexpected social reactions within spaces shared by humans and devices such as autonomous cars. What becomes crucial is the notion of responsibility, even if everything is automated. Making critical choices in critical moments remains a very valuable task on the side of human intelligence. AI can calculate risks, project potential effects, and supply statistics.
The question, then, is how to train humans for the devices around them.
To broach this issue of unexpected encounters with artificial intelligence, a driving school for autonomous driving is installed at Kulturfolger as a prime example, offering various manuals and objects that might serve to train our relationship to autonomous subjects in a digitized environment.
Hannes Brunner favors ephemeral materials in his installations. In his contextual art projects, different media are combined with social processes, extending from digital communication into real, physical space. Recently, in Berlin, he proposed a project to install a robot on a public square which, like a street artist in an acrobatic act, spits out algorithms from research laboratories as metal threads and forms them into a giant ball of yarn. In the US, he has had his own relationship with an industrial robot cast in iron. For Kulturfolger, he comments on our changing relationship to autonomous cars.