
Humanity's future in the hands of machines

The future is bright. Or is it?

The following is a transcript of the documentary iHuman (2019).
The film looks at the perks and drawbacks of artificial intelligence (AI). The filmmakers interviewed people from various disciplines whose work relates, or once related, to the topic.

Great care has been taken to keep the content true to the original.
Some statements were abridged and rephrased for readability. Minor concessions had to be made due to the German voiceover of this ARTE broadcast, which at times made it impossible to hear the words originally used by the interviewees.


iHuman (Norway 2019)

Synopsis (Random Electronic Program Guide)

iHuman is a political thriller. It feels like science fiction, yet it deals with the prosaic truth of today’s world.
This documentary is about artificial intelligence (AI, AGI), its power and the social control it exerts. Its makers demonstrate extraordinary insight into the booming AI industry and show how the mightiest technology of all time is transforming our society, our future and people's basic understanding of themselves.

Announcement by ARTE’s TV host

AI can help fight humanity's problems but also harbors dangers.

- Climate change

- Poverty

- Disease

- Pandemics

-- e.g. the Corona app

-- e.g. the Chinese police app to enforce whatever restrictions the authorities imposed.

- Surveillance made easy

- The Corona crisis surveillance app is a tiny foretaste of both the blessing and curse of AI.

Stephen Hawking – Theoretical physicist

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.

Opening Credits - Random TV voices

Intelligence is the ability to understand.

What we know we pass on to machines.

We can't control the machines equipped with AI.

AI is in its infancy.

Data is the new oil.

Today, Amazon, Google and Facebook are richer and mightier than any other company in the history of mankind.

A handful of people in those enterprises control and steer the thinking of billions of people.

This technology changes what it means to be human.

Max Tegmark – The Future of Life Institute

AI is simply non-biological intelligence.

And intelligence itself means the ability to reach one's goals.

I’m convinced AI is either the best or the worst thing that has ever happened to humanity.

We can use it to solve all our problems.

- cure diseases

- curb climate change

- do away with poverty

But we can use the same technology also to create a brutal global dictatorship with surveillance, inequality and suffering.

That’s why this is the most important discussion of our time.

Zeynep Tufekci – Technology Sociologist

AI is everywhere. We now have thinking machines.

AI decides what kind of information we see first on social media like Facebook and what gets dropped.

AI estimates the risks we carry when applying for health insurance.

AI screens your résumé against job descriptions.

Eleonore Pauwels – United Nations University

We are made of data. What we do, how we talk, how we behave, what we love, our daily routines.

IT specialists develop algorithms that learn to digest huge quantities of data in order to analyze every aspect of our lives.

We are facing a form of precision surveillance.

We could call this type of surveillance 'algorithmic surveillance'.

Nobody goes unrecognized. Nobody can hide. Surveillance is omnipresent.

Nearly all of AI development is in the hands of a small number of enterprises or governments.

Ben Goertzel - former chief programmer of Hanson Robotics

Almost all AI development today is in the hands of a few companies and governments.

The main reasons for developing AI currently are

- killing

- espionage

- brainwashing

- A huge surveillance apparatus is being built up by various governments

- A massive advertising industry is being fed with information about which ad to serve to each individual

Max Tegmark – The Future of Life Institute

Humanity is at an inflection point.

- our understanding of AI is still very narrow

- the starting point of AI was the goal of creating an AI that is better at everything than humans.

- in principle, we are building a god.

- it is going to revolutionize life as we know it.

- it's very important now to step back and consider thoroughly what kind of future society we want to have.

Zeynep Tufekci - Technology sociologist

- We are in the midst of this crazy revolution

- It's like we are cultivating a new life form, an offspring

- Just like with your natural offspring, you cannot control everything it is going to do.

Jürgen Schmidhuber - lead researcher NNAISENSE

Onscreen display

Schmidhuber is referred to as the 'father of AI'. His work fundamentally changed machine learning and AI via face and speech recognition, used by billions of people worldwide.

- We live in a privileged time. We are going to experience that AI will outdo us in most, if not all essential disciplines.

- everything will change

- a new life form is emerging

- when I was a boy I asked myself, how can I maximize my influence?

- I realized that I had to build something smarter than myself to reach that goal.

- so that I could lean back while that smarter thing keeps augmenting itself and solves all problems I cannot solve

- multiplying my tiny bit of creativity endlessly. That's what has been driving me ever since.

How do I build a many-faceted AI?

- speech recognition

- pictures

-- handwriting

-- faces

- we have made good progress at that.

e.g. LSTM, neural networks we developed in our labs in Munich and Switzerland.

- they are being used for speech and video recognition, and for translation

- they are in all smartphones, in one billion iPhones and in two billion Android phones.

- so we are producing a large number of byproducts on our way to the general goal

- the general goal is the creation of an artificial general intelligence (AGI)

-- can learn how to enhance its algorithms on its own

-- so basically, it can improve the way it learns

-- it can also recursively improve the way it learns, within the basic boundaries of computability

-- One of my favorite robots is this one. We use this robot for studies concerning artificial curiosity.

-- We are trying to teach the robot how to teach itself (a toy sketch of this idea follows after this list)

-- A baby curiously explores its world.

--- When it learns how the world works, it more and more becomes a general problem solver

--- That is exactly how our systems learn. They ask themselves questions, instead of just answering ours.

-- You have to give AI the freedom to invent tasks for itself

--- If you don't do that it's not going to become very smart

--- On the other hand it's difficult to predict what it will do.
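
The "artificial curiosity" idea above can be illustrated with a toy sketch. Everything here is invented for illustration, not NNAISENSE's actual method: the agent rewards itself for situations its own world model cannot yet predict, so it effectively invents its own learning tasks.

```python
# Toy sketch of "artificial curiosity": the agent's intrinsic reward is
# the uncertainty of its own world model, so it seeks out transitions it
# cannot yet predict -- i.e. it invents its own learning tasks.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
# Hidden environment dynamics: next_state = T[state, action]
T = rng.integers(0, n_states, size=(n_states, n_actions))
# The agent's learned world model: counts of observed transitions
model = np.zeros((n_states, n_actions, n_states))

state = 0
for step in range(200):
    counts = model[state] + 1e-6
    probs = counts / counts.sum(axis=1, keepdims=True)
    # Curiosity bonus: entropy of each action's predicted outcome,
    # i.e. how poorly the agent can predict that action's result
    entropy = -(probs * np.log(probs)).sum(axis=1)
    action = int(np.argmax(entropy))       # try what it understands least
    next_state = T[state, action]
    model[state, action, next_state] += 1  # improve the world model
    state = next_state
```

Once a transition is well predicted its entropy drops and the agent moves on to something it does not understand yet, which is the "asking its own questions" behavior described above.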

Ilya Sutskever - head of research, OpenAI

He is a creative intellectual involved in one of the major breakthroughs in 'deep learning' and AI.

He was hired by Elon Musk to head OpenAI, one of the most prominent AI research labs.

- For me, technology is like a force of nature

- There are many parallels between human evolution and technology

- Playing god. Scientists have been accused of playing god before.

- But now we are aware that we are creating something that is very different from everything we've done so far.

I've had an early interest in machine learning.

- what is experience

- what is learning

- what is thinking

- how does the brain work?

Those are philosophical questions, but it seems we can make algorithms that help answer them.

It's like applied philosophy.

AGI (definition)

- A computer system capable of carrying out each and every activity or task of a human being, only better.

- We are certainly going to create completely autonomous beings with goals of their own.

- Since these creatures will be more intelligent than we are, it's very important that their goals be aligned with ours.

- We are trying to conduct open AI research.

- We want to be at the forefront of AI science

-- We want to steer AI science and its initial conditions

-- so that the future will be good for humans

- AI is great. It will solve all the problems we have today, like

-- employment

-- disease

-- poverty

- But it will also create new problems, like

-- I believe the fake news problem is going to be a million times worse

-- Cyber attacks are going to be more extreme

-- there will be completely automatic AI weapons

-- we believe that AI has the potential to create an endlessly stable dictatorship

-- 10 or 15 years from now there will be a multitude of intelligent systems

--- it's highly probable that those systems will have incredible impact on our society

--- will humans benefit?

--- who will benefit, and who won't?

Michael Kosinski

- was labeled the most controversial data scientist of his time.

- His pioneering science significantly inspired the methods used by companies like Cambridge Analytica

How tall would a tower be if you printed and stacked all the data humanity produces in a single day? (A rough back-of-the-envelope sketch follows after this list.)

- four times to the sun and back again

- in 2020 we will produce 62 GB of data per person per day

- Throughout our lives we leave behind large quantities of data.

- They provide the computer algorithms with pretty good insight into

-- who we are

-- what we want

-- what we do
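
As a rough back-of-the-envelope check of the tower claim: every constant below is my own assumption (page capacity, sheet thickness, population), not a figure from the film, and the result is extremely sensitive to those assumptions.

```python
# Back-of-the-envelope check of the "printed data tower" claim.
PEOPLE         = 7.8e9      # world population, ~2020 (assumption)
GB_PER_DAY     = 62         # per person per day, as quoted above
BYTES_PER_PAGE = 3_000      # ~one printed page of plain text (assumption)
SHEET_M        = 0.0001     # paper thickness in meters (0.1 mm, assumption)
SUN_M          = 1.496e11   # Earth-Sun distance in meters

total_bytes = PEOPLE * GB_PER_DAY * 1e9
pages       = total_bytes / BYTES_PER_PAGE
height_m    = pages * SHEET_M
print(f"{height_m / (2 * SUN_M):.0f} Earth-Sun round trips")  # ~54
```

With these assumptions the stack makes roughly fifty round trips rather than four; the film's figure evidently assumes denser pages or a smaller data estimate, which is exactly why such illustrations should be read as orders of magnitude only.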

In my work, I've looked at different kinds of digital footprints.

- Facebook Likes

- speech

- credit card analyses

- browser histories

- search histories

-- Each time I found that, with enough data, you can make precise forecasts of a person's

--- future actions

--- and find out important intimate details about their life

One can easily use that data to manipulate people.

Facebook relays 2 billion pieces of information every day, or more.

When you change Facebook's algorithm only slightly, you can sway opinions and therefore the votes of millions. (pictures of Brexit shown)

A politician cannot find out which of his messages each individual voter likes.

But a computer can see which political message would broadly convince people.

Alexander Nix – CEO Cambridge Analytica (CA)

Seminar or convention: 'It's my privilege to speak to you about the power of big data.'

CA's algorithm secretly collected the private data of 50 million Facebook users. (pics of Trump election are shown)

CA was hired by D. Trump's election campaigners to gather information suitable to influence American voters.

That way they said they could assess the personality of each American citizen.

Christopher W. - former CA employee, whistleblower

- we were working on apps that allowed us to siphon off data and analyze it with algorithms

-- that allowed us to assess the details of people's personality

-- and their psychological qualities or mental weaknesses, known as the 'Big Five' traits (a toy sketch of such a model follows after this list):

--- openness

--- conscientiousness

--- extraversion

--- agreeableness

--- neuroticism
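
A toy sketch of the kind of model described above: predicting one Big Five trait score from a matrix of Facebook-style Likes. The data and numbers are synthetic; the real Kosinski and Cambridge Analytica pipelines are not public.

```python
# Hypothetical sketch: predicting a Big Five trait from a Likes matrix.
# Synthetic data throughout; real pipelines are not public.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
n_users, n_pages = 1_000, 200
likes = rng.integers(0, 2, size=(n_users, n_pages))  # user x page Likes
true_weights = rng.normal(0, 1, n_pages)             # pretend trait signal
extraversion = likes @ true_weights + rng.normal(0, 5, n_users)

# Fit on 800 users, evaluate on the 200 held out
model = Ridge(alpha=1.0).fit(likes[:800], extraversion[:800])
print("held-out R^2:", round(model.score(likes[800:], extraversion[800:]), 2))
```

Even a plain linear model recovers much of the trait once enough users and pages are observed, which is why a Likes matrix is such a potent psychological probe.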

Michael Kosinski (Stanford University):

- CA has always said that their algorithms are based on my work.

- But CA is only one of 100 companies who target voters using such methods.

- Journalists ask me questions like: 'How does it feel to know that you made Trump's election possible and supported Brexit?'

- How do you answer such questions?

- I guess I'll have to live with the fact that I am held responsible for all of it.

Kara Swisher - Tech journalist

- Technology started out as a democratizing force

- It gave people the opportunity to communicate without a watchdog

- There has never been a bigger communication experiment in the history of mankind

-- What happens when everybody is entitled to say something?

-- You would assume that it would lead to humanity's good:

--- more democracy

--- more discussion

--- more tolerance

-- But instead those systems were hijacked (showing pictures of Mark Zuckerberg at a convention)

--- One company, Facebook, is responsible for the major part of society's communication.

--- The same thing is true of Google.

---- Everything we want to know about the world originates from them.

---- The worldwide information economy is being controlled by a handful of people.

Mark Zuckerberg - CEO Facebook (at a convention)

- We stand

-- for connecting every person

-- for a global community

Yobie Benjamin - Business Angel

- The world's richest organizations are tech companies.

-- Google, Apple, Microsoft, Amazon, Facebook

-- It's staggering how in the last decade the most successful companies predominantly trade electrons.

-- Those bits and bytes are the new currency

-- Data are constantly turned into money, even if we don't see that.

--- Google commands all kinds of data

---- they track people via GPS location

---- they know our search history

---- they are aware of our political attitude

-- Your search history reveals everything about you:

--- from your health status up to your sexual preferences

-- Google's reach is limitless

Zeynep Tufekci:

So we've seen how Google and Facebook became massive surveillance apparatuses.

Both are basically sellers of advertisements

It sounds trivial, but they're actually high-tech advertisers.

The reason why they are so profitable is that they employ AI to process everything about us,

then connect us with the ad sponsor, who wants to reach us for whichever message.

Kara Swisher

One problem with technology is that it is being developed to make us addicted.

Apps are developed in such a way that they absorb and occupy us.

At bottom, they want to be slot machines for attention.

You give them your attention all day long, you're always checking your mobile.

Silvija Seres - Mathematician

When somebody controls what you read, they also control what you think.

You will then get more of what you've seen, because that will generate more clicks.

And you will get more related advertising.

But it will also keep you trapped in your very own echo chamber, which is responsible for the polarization we see today.
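
The click-feedback loop described here can be reduced to a few lines: whatever gets clicked is ranked higher next time, so the feed narrows by itself. A minimal sketch with invented numbers, not any platform's actual ranking code:

```python
# Toy click-feedback loop: clicked topics get ranked higher next time.
import numpy as np

rng = np.random.default_rng(1)
topics = ["politics-A", "politics-B", "sports", "science"]
scores = np.ones(len(topics))                 # initial ranking weights
preference = np.array([0.7, 0.1, 0.1, 0.1])   # the user's latent taste

for step in range(1000):
    probs = scores / scores.sum()             # feed composition
    shown = rng.choice(len(topics), p=probs)
    if rng.random() < preference[shown]:      # did the user click?
        scores[shown] += 1.0                  # boost what got clicked
print(dict(zip(topics, (scores / scores.sum()).round(2))))
```

The feed ends up dominated by "politics-A": the echo chamber emerges from nothing more than optimizing for clicks.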

pictures of rallies for/against a cause (Bolsonaro for president 2018, fighting protesters, Neo-Nazi rally)

All over the world we see people who are polarized.

Conflicts are partly being triggered by algorithms.

Those algorithms have found out that conflicts engage people and that they foster tribal instincts that make people root for their preferred teams.

Social media's role in spreading hate (CNN report)

Social media could be amplifying hate crime all over the world.

It's about how people can be radicalized

Is this a key moment for the media moguls? Are they now ready to take responsibility for what they communicate to the world?

When you deploy a mighty piece of technology globally (like Google and Facebook) and it then pushes the world towards polarization, that harbors the potential for worldwide unrest.

pictures of people chanting 'white lives matter' and 'black lives matter' , a man with a T-shirt saying 'Radical Agenda', a woman fighting a man and vice-versa, a car driver randomly running over people

Ilya Sutskever - head of research, OpenAI

picture: Wears a hoodie saying 'creative destruction'.

About AGI

Imagine your smartest friend existed a thousand times over.

Each of them a thousand times as smart, but also a thousand times faster.

This means each of them would master three years of thinking in a single day.

Imagine how much you could do if you could master the workload of a year on a single day.

It is reasonable to say that AI is more exciting than all the quantum physics of the 20th century.

And back then they discovered nuclear power. I'm happy to be a part of this.

Many experienced experts in machine learning and AI are very skeptical about AGI.

About when it will come or whether it will become real at all.

Right now, only a small number of people have realized that neural networks for AI will run a hundred thousand times faster only a few years from now.

For a long time the hardware sector didn't know what to produce next.

But thanks to functioning neural networks, we now have a reason to build huge computers.

You can build a brain made of silicon. It's possible.

pictures of Google data center

The very first AGIs will be very big data centers with specialized neural net processors working in parallel.

A compact, power-hungry package that will consume about 100 million times more power than an average household.

The very first AGIs will be vastly more efficient than human beings.

People won't be useful for the majority of the tasks at hand.

Why would you employ a person when you have a computer that will do the job faster and cheaper?

Without any doubt, AGI will be the most important technology in the history of Earth. By a huge margin.

It will be far more important than electricity, nuclear power and the internet put together.

One could even say that it is the ultimate goal of human science, the goal of computer science, the end game.

Building this is the endgame. It will be built and it will become a new life form. It will make us superfluous.

pictures showing scientific progress: radio, nuclear power, TV, rockets, computer hard- and software.

Stuart Russell - Computer scientist

Physical work has become nearly superfluous in the last century.

Mental routine work done by humans will become obsolete in the near future.

That's why we can observe jobs disappearing in the middle class.

TV-voice:

Once in a while a technology comes along that changes everything.

- Today Apple is reinventing the phone.

- Machine intelligence is all around us (pictures of Amazon Alexa on a family table)

Max Tegmark - The Future of Life Institute

- The list of things humans can do better than computers is becoming shorter every day.

Yobie Benjamin - Business Angel

- self-driving cars are great: They'll probably prevent accidents.

- But alongside with that 10 million jobs will be lost in the USA.

- What do you do with 10 million unemployed people?

Salil Shetty - Amnesty International

The risk for social unrest becomes very high when you aggravate social disparities.

By definition, AGI can do all jobs better than we can.

Whoever says there will always be work that people can do better than machines, shuts their eyes to the facts of science and claims that there will never be a technological singularity.

It's like we're sitting in a high-speed train headed for a dark tunnel, and those on watch are sleeping.

pictures of homeless people. A sign reads 'First They Came For The Homeless'; a book titled 'Game Change'.

Michael Kosinski

- on a treadmill drinking coffee and simultaneously checking faces on a computer

- large parts of our digital footprint are digital pictures.

- as a psychologist, I'm particularly interested in digital pictures of faces.

- here you can see the differences between the faces of a straight and a gay person.

pictures of faces

- composite heterosexual faces

- composite gay faces

- average facial landmarks

- overlay: gay/straight

-- hetero men have a slightly more pronounced jaw

-- gay women have slightly bigger jaws

-- computer algorithms can find out (a minimal sketch follows after this list):

--- our political convictions

--- our sexual orientation

--- or our intelligence

-- and that only by looking at pictures of our faces.
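
A minimal sketch of the approach described here, assuming faces have already been converted to numeric feature vectors (the published study reportedly used features from a pretrained face network followed by logistic regression). The data below is random stand-in data, not real faces:

```python
# Sketch: a plain classifier over numeric face features (stand-in data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_faces, n_features = 500, 128
embeddings = rng.normal(size=(n_faces, n_features))  # stand-in face vectors
# Invent a label weakly correlated with one feature dimension
labels = (embeddings[:, 0] + rng.normal(0, 1, n_faces) > 0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(embeddings[:400], labels[:400])
print("held-out accuracy:", clf.score(embeddings[400:], labels[400:]))
```

The point of the passage: once faces are reduced to feature vectors, any attribute that correlates with those vectors becomes a classification target.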

Michael Kosinski speaking at a seminar

Even human beings can tell apart a straight and a gay person.

Again it becomes clear that a computer can do that far more precisely.

What you see here is the precision of vanilla face-recognition software.

For homosexual people all over the world this is terrible news.

The same algorithms can be used to find out about other intimate characteristics, e.g.:

- whether you belong to the opposition

- whether you are liberal

- whether you are atheist

Being atheist is a crime punished with death in Saudi Arabia.

My mission as an academic is to warn people of the dangers.

The problem is that human beings very often choose to ignore bad news.

It's very different when you start getting death threats from one day to the next. I received quite a few.

But as a scientist I must reveal what's possible.

What I'm interested in is what other characteristics can be read from faces.

- depression

- suicide intentions

-- then a surveillance camera at the station could save lives

What if one could forecast that a person is prone to committing a crime?

You (his students) probably had counselors at school to determine whether a pupil is displaying behavioral problems.

Imagine you could predict with high probability that someone is prone to commit a crime in the future.

- based on their

-- language

-- face

-- facial expressions

-- likes on Facebook

I don't develop new methods. I just describe or test some of them in an academic context.

But it could of course be that by issuing warnings I inspire others and give them new ideas.

Philip Alston - Human Rights lawyer

We haven't yet seen where our society is really headed.

The tech companies want to collect every little piece of information in order to ensure their business thrives.

Police and military want the same to ensure safety.

The interests of the two strongly overlap.

That's why this 'military-technological teamwork' is growing dramatically.

Lee Fang - Journalist, The Intercept:

The CIA has had a close relationship with Silicon Valley for a long time.

Its venture capital arm In-Q-Tel provides investment money for startups that it thinks will develop groundbreaking new technology it can deploy.

- e.g.: The big data analysis company Palantir was funded by In-Q-Tel.

Peter Thiel - CEO Palantir

It's time to rebuild America.

Spencer Woodman - Investigative Journalist:

Trump was mainly elected because of his promise to curb immigration.

Palantir ingests huge amounts of data, e.g.

- where you live

- where you work

- who you know,

- who your neighbors are

- who belongs to your family.

- what places you've visited ever before in your life

- where you've lived before

- your social media profile

Palantir gets all of that and is remarkably good at structuring it

and selling it to

- law enforcement agencies

- immigration agencies

- intelligence agencies

to find you and learn everything there is to know about you.

The number one striking fact about information society is that nobody is truly informed anymore.

Max Tegmark - The Future of Life Institute

We are handing over to AI vital decisions regarding our lives.

Old-fashioned AI was built by humans who knew how it worked.

But today powerful AI systems learn by themselves and we have no clue as to how they work.

That makes it very hard to trust them.

Ben Wizner - American Civil Liberties Union

This isn't some futuristic technology.

It's happening today.

AI influences where in your community a fire station or a school is built.

It decides whether you go free or stay in jail.

It decides where the police will turn up.

It decides who gets monitored closely by the police.

Rumman Chowdhury - data scientist

Predictive policing is popular in the US.

An algorithm predicts where a crime will take place.

Based on crime rates, it determines where police are sent.

We do know, however, that our society is biased about what ethnicities are predominantly involved in criminal offense.

Blacks and Latinos are stopped much more often than Whites.

This prejudice becomes part of the algorithm.

So the cops go to those neighborhoods and guess what they do there: They arrest people.

Data gathered from these arrests then flows back into the system and adds to the distortion.

That's called a feedback loop.
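
The feedback loop described here can be simulated in a few lines: two districts with the same true crime rate, but one starts with more recorded arrests. A toy sketch with invented numbers, not any real predictive-policing product:

```python
# Toy simulation of the feedback loop: two districts with the SAME true
# crime rate, but district 0 starts with more recorded arrests.
import numpy as np

rng = np.random.default_rng(3)
true_crime_rate = np.array([0.1, 0.1])  # identical in both districts
arrests = np.array([10.0, 5.0])         # historical bias in the data

for week in range(52):
    patrols = 100 * arrests / arrests.sum()  # patrols follow the data
    new_arrests = rng.binomial(patrols.astype(int), true_crime_rate)
    arrests += new_arrests                   # arrests feed back in

print("patrol share:", (arrests / arrests.sum()).round(2))
```

Despite identical underlying crime, district 0 keeps roughly two thirds of the patrols indefinitely: the initial bias is laundered into "objective" data and never corrects itself.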

0:52:00

Philip Alston - Human Rights Advocate

Predictive policing in its extreme form will lead to experts saying 'show me your baby and I will tell you if it'll become a criminal.'

Now that we can predict that we will surveil those kids more closely and we're going to jump on them at the first signs. That's going to allow for a more effective policing, but it's also going to reinforce dramatically the existing injustices.

Ben Wizner - American Civil Liberties Union

Imagine a world where networks of surveillance and drone cameras use sophisticated face recognition and are coupled with other national databases. Technologies will be available to comprehensively trace and record all of our movements.

With that governments and mighty corporations will essentially be able to hit rewind on our lives.

We may be completely above suspicion today, but in, say, five years they may want to know more about us.

Then they can recreate granularly anything we've done, everyone we've seen, everyone we've been around during that period.

That's an extraordinary power to cede to anyone.

And it's a world that is hard for people to imagine.

We've already created the architecture to enable that.

Lee Fang

I'm a political reporter. And I'm very interested in how companies use their political power to influence the public policy process.

The large tech companies and their lobbyists meet behind closed doors and craft policies we have to live under.

That's true for surveillance policies, data gathering and increasingly when it comes to military and foreign policy.

In 2016 the Department of Defense founded the Defense Innovation Board.

That's a special committee to bring the tech company bosses into closer contact with the military.

Eric Schmidt, the former Google CEO and executive chairman of Alphabet (Google's parent company), became the chairman of the Defense Innovation Board.

And one of his first priorities was to implement more artificial intelligence into the military.

Eric Schmidt – Former CEO of Google and executive chairman of Alphabet (Google’s parent company)

"I've worked with a group of volunteers to take a look at innovation, overall military, and my conclusion is that we have fantastic people who are trapped in a very bad system."

Robert Work - Former US Undersecretary of Defense

From the Department of Defense's perspective the interesting part is about unmanned systems, and how robots and unmanned systems will change warfare.

The smarter you make those robots, the smarter the military could be.

Robert Work put together the interdisciplinary team for algorithmic warfare, better known as 'Project Maven'.

Eric Schmidt held a few speeches, talked to the media and essentially said it was designed to raise fuel efficiency and to support military logistics. But behind closed doors there was another effort:

Late in 2017, Eric Schmidt asked Google to secretly contribute to Project Maven.

And that was to process the vast volume of images from drones in Iraq and Afghanistan and to quickly identify targets on the battlefield.

We have a sensor that is capable of creating a full-motion video of an entire city.

And we had three teams of seven persons who were working constantly, but they could only process 15% of the information.

The other 85% remained in the editing room. That's why we said that AI and machine learning would help us process 100% of the information.

Google for a long time had the motto 'don't be evil'. They long projected an image of themselves as an advocate of public transparency.

But as Google slowly transformed into a defense contractor, they exercised the utmost secrecy.

Google entered this project while almost all of its staff were left in the dark about it.

Tyler Breisacher - Former Google employee

Usually within Google everybody can know about any projects going on in other departments.

It was unsettling that Project Maven was kept secret, because that was not the norm at Google.

When the story first broke, it caused a firestorm within Google.

Many staff members protested and signed a petition rejecting this work.

You have to say aloud that you don't want any part in this.

There are companies that are so-called defense partners and Google shouldn't count among them.

Google needs people's trust for it to work.

We have emails which prove that Google continued to deceive its staff and claimed that the project was just a minor thing, that it would yield only nine million dollars, which was just a drop in the ocean for a company like Google.

From those emails we know, however, that Google was expecting to expand the project to some 250 million dollars,

and that the contract would provide Google with government certification that would lead to further projects worth as much as 10 billion dollars.

Sundar Pichai - Google chief executive

The pressure for Google to grab government contracts comes at a time when Google's rivals also undergo a cultural change.

Amazon also applies for military and criminal prosecution projects.

IBM and the other leading players are pitching for the same market.

To stay competitive, Google has slowly transformed.

Robert Work - Former US Defense Undersecretary

The Defense Science Board said that of all technological advancements, AI and the autonomous systems arising from it are the most promising, and asked: are we investing enough?

Philip Alston - Human Rights Advocate

Once we develop autonomous deadly weapons, weapons that cannot be controlled at all, as a president you have to say to hell with national law. We have these weapons, we'll do what we want with them.

Liz O'Sullivan - Whistleblower at Project Maven

We are very close. When you have the hardware, all you have to do is flip a switch to make them autonomous.

What's keeping someone from it?

You really have to be scared by the thought that war would happen at the speed of machines.

What about a machine that has played through millions of war scenarios?

What will happen when a team of drones, each given partial control, works together in real time?

What will happen when such a swarm is told to conquer a city?

How will they take over that city?

The truth is it's safe to say that we won't know until it happens.

Robert Work – Former US Undersecretary of Defense

We don't want an AI system to decide which humans to attack.

But we are dealing with authoritarian competitors. And an authoritarian regime will have fewer scruples about delegating such decisions to a machine.

So how that plays out remains to be seen.

Silvija Seres - Mathematician

It's almost the gift of AI that it forces us to collectively think about what it means to be human.

What can I as a human do better than a super smart machine?

First we create a technology, then this technology recreates us from scratch.

We must make sure that nothing which makes us so beautifully human is overlooked.

Tobias Rees - Philosopher, Berggruen Institute

Once we build intelligent machines, the philosophical vocabulary we use to talk about ourselves as humans fails us.

When you try to list the words that describe what makes us human, it turns out there aren't many.

- culture

- history

- sociality

- maybe politics

- civilization

- subjectivity

All of these terms are grounded in two concepts:

- that humans are more than animals

- and that humans are more than mere machines.

But if machines really start thinking, a large set of philosophical questions opens up, revolving around things like

- who we are

- what is our spot in the world

- what is the world

- how is it structured

- are the categories we have relied on still valid, or are they wrong?

Max Tegmark - The Future of Life Institute

Many people believe that intellectual capacity is something mysterious that can only exist in a biological body.

But intelligence is information processing. It doesn't matter whether intelligence arises from carbon atoms in human brains or from silicon atoms in computers.

Part of the success of AI is the fact that it adopted efficient ideas from the evolutionary process.

We found out that neurons in a brain are intricately connected. So we stole this idea and transferred it to neural networks in computers.

And that's what revolutionized AI.

If one day we have AGI, then by definition AI can also enhance program code that was itself written by AI.

This means that from then on, AI development will be driven not by human programmers, but by AI programming itself.

Recursive, self-improving AI leaves human intelligence far behind and creates a super intelligence.

This would be the last invention we would ever have to make.

AI can invent everything else then by far faster than we could.

Brad Smith, President of Microsoft Corporation @RISE Conference

There is a future that we all need to talk about.

Some of the fundamental questions about the future of artificial intelligence.

Not just where it's going, but what it means for society to go there.

It is not what computers can do, but what computers should do.

As the generation that is bringing AI to the future, we are the generation that will answer this question first and foremost.

Ben Goertzel - Former Chief Scientist Hanson Robotics

We haven't created a machine yet that is able to think similar to a human being.

But we are approaching that stage.

Maybe it will take us five years to have AI on a human level, maybe it will take us 50 or 100.

Compared with the history of humanity, this is very soon.

The AI field is very international. China is in the fast lane, a fierce competitor to the USA and Europe.

It equips its AI with lots of CPU power and gathers vast amounts of data, to help AI learn.

There is a new generation of Chinese scientists, and no one knows where the next innovation is going to come from.

Zen Liang - Business Angel

When will China surpass Silicon Valley?

China always wanted to be a superpower.

The Chinese think that AI is the chance to become one of the most advanced countries in terms of economy and technology.

It's like they set up a banner saying 'AI is the area to go to'. So Chinese companies jump to it.

Chinese tech giants like Baidu, Tencent or Alibaba have invested a lot of money in AI.

That's why Chinese AI development is booming.

Yi Xin Cheng - AI computer scientist

In China everybody pays with Alipay and WeChat Pay. Mobile payments are everywhere.

That makes it easy to analyze your spending habits and, from them, your credit rating.

Face recognition is widespread in China, for instance at airports or train stations.

In the future, maybe in a few months from now, you won't need a ticket to board a train. You'll use your face.

Xie Yinan - Megvii / Face++

We are creating the world's largest platform for facial recognition.

300,000 coders are using our platform.

A large portion of these are selfie camera apps.

They let you appear prettier.

There are millions and millions of cameras on the planet.

From my perspective, every camera is a data generator.

In a machine's eye, the face is decomposed into its main features and turned into a piece of code.

This way we can determine how old somebody is, whether they are male or female, and what that person is feeling.

Shopping is about what people look at. We can follow your eyeballs.

When they concentrate on a product, we can track that.

Therefore we know what kinds of people prefer which kind of product.
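
A minimal sketch of what "the face turns into a piece of code" means in practice, with hypothetical helper names (Face++'s real pipeline is proprietary): a network maps a face image to a fixed-length vector, and identity is decided by comparing vectors.

```python
# Hypothetical sketch: a face becomes a fixed-length vector ("a piece
# of code"), and identity is just a vector comparison.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a trained face-recognition network."""
    flat = image.astype(float).ravel()
    vec = flat[:128] if flat.size >= 128 else np.resize(flat, 128)
    return vec / (np.linalg.norm(vec) + 1e-9)  # unit-length embedding

def same_person(img_a, img_b, threshold=0.8) -> bool:
    a, b = embed_face(img_a), embed_face(img_b)
    return float(a @ b) >= threshold  # cosine similarity of unit vectors

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64))  # stand-in camera frame
print(same_person(frame, frame))             # True: similarity is 1.0
```

Age, gender and emotion estimates are then just further classifiers on top of the same vector, which is what makes every camera "a data generator".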

Wenyuan Dai - Chief executive 4Paradigm

Our mission is to create a platform for millions of developers in China.

We assess all data we can get hold of.

Not only user profiles, but what people presently do and where they are located right at that moment.

The platform will be so valuable that we care little about profit now, because there will be profit.

The social credit system is only one of the possible applications.

Sophie Richardson - Human Rights Watch

The Chinese government uses an array of different technologies.

Among them are AI, big data platforms and face recognition, speech recognition.

Mainly to monitor what the population is up to.

I think the Chinese government has made something very clear.

They want to gather an incredible amount of data about their population to socially engineer a dissent-free society.

The logic behind the social credit system is similar to a credit check that determines whether you are creditworthy,

but with a very political dimension added to it.

Like are you a trustworthy human being?

What did you say online?

Were you critical of the authorities?

Do you have a criminal record?

All that information is gathered to judge whether someone performed well in their view.

Then you get easier access to public benefits and privileges.

If you haven't done very well, you are going to be penalized or restricted.

There is no way people can challenge those ratings.

In most cases they don't even know the status of their creditworthiness until they try to access public help.

Or when they try to buy a plane ticket, or apply for a passport, or want to enroll their child at a school.

Only then will they realize what social category they were put into.

And that they have to expect negative consequences as a result.

We spent the better part of the last one or two years investigating the abuse of surveillance technology across China.

It has often taken us to Xinjiang in the northwestern region of China, where more than half of the population are Turkic and Muslim people.

Uighurs, Kazakhs, Hui.

This is a region of China whose population has long been regarded as suspect or disloyal.

We got hold of information about a platform called 'integrated platform for joint operations'.

It's a police program generating lists of persons that must be subjected to a political re-education.

A number of our interview partners who talked about the political re-education camps in China painted a picture of an incredible surveillance state.

It's a region flooded with surveillance cameras for face recognition.

Check points, body scanners, QR codes outside people's homes.

It's the stuff dystopian films are made of, and you start thinking 'that would be a creepy world'.

13 million Turkic Muslims are living in this kind of reality right now.

CBSN news cast

The Intercept (an online news publication of First Look Media, owned by Pierre Omidyar)

The Intercept reports that Google is planning to launch a censored search engine in China.

Google's search for new markets led them to China, despite Beijing's strict censorship.

CNN Live news cast

Google scientist resigned over censored search engine project

Tell us more about why you saw it as your moral duty to resign from your position at Google.

You talk about complicity to oppression, censorship and surveillance.

Jack Poulson - Former Google Senior Scientist - whistleblower

A Chinese joint venture company was founded to set up Google's work in China.

Which raises the question of to what extent that company has access to blacklists and surveillance data.

The fact that Google refuses to respond to human rights organizations' requests for comment should be extremely disturbing to everyone.

Due to my conviction that varying opinions are vitally important to democracy, I had to quit my job at Google.

I want no part in undermining the protection of dissidents.

The UN has reported that between 200,000 and 1 million Uighurs have disappeared into re-education camps.

There is solid evidence that Google is complicit in this by setting up a censored search engine in China.

Dragonfly is a project made to tailor the Chinese search engine to the interests of the Chinese government.

That includes the censorship of sensitive content about human rights, political representatives, and student protests.

And that is only a small part of the project.

Much more disturbing is the surveillance made possible by it.

I mentioned the facts to my friends and colleagues, but they all said 'I don't know anything'.

And when there was finally a meeting about it, they did not even mention the serious concerns associated with it.

Thereupon I handed in my resignation.

Not only to my superiors, but I sent it company-wide.

That's the letter I read you from.

Personally, I haven't slept well. I had terrible headaches and woke up in the middle of the night just sweating.

With that said, the global response to this is good.

Engineers should demand to know how their work is going to be used.

They should be included in ethical decisions.

Most people do not understand what a very big prescriptive technology means.

When the work is being split up into multiple parts and the individual can overlook only a tiny bit,

it's impossible to see how everything plays together.

One can draw a parallel to physicists and their work on the atomic bomb.

That is exactly the community I come from.

I never was a nuclear scientist. I was working in the field of applied mathematics.

For my thesis I received financial support from programs that train people to work in weapons labs.

One can definitely say that this is about an existential threat, that whoever dominates AI will lead militarily.

Robert Work - Former US Undersecretary of Defense

China fully expects to surpass the USA as the dominant economic player.

And it believes that AI will speed up the process.

They also see themselves capable of getting ahead of the USA as a military power.

Their plan is simple:

By 2020 they want to catch up to the USA in these technologies.

By 2025 they want to overtake the USA.

By 2030 they want to be the world leader in AI and autonomous technologies.

That's the national master plan. It is backed by at least 1.5 billion dollars of investment funds.

It is definitely a race.

Jürgen Schmidhuber - Director of Research - NNAISENSE

AI is a little bit like fire.

Fire was invented 700,000 years ago and has its pros and cons.

People realized you can use fire to cook and clean.

But they also realized that one can kill other people with it.

Like common fire, AI has the ability to become a wildfire.

But the advantages outweigh the disadvantages so much, that the development can't be stopped.

Europe is waking up.

Many companies are realizing that the next AI wave is going to be much bigger than the current wave.

The next wave of AI will be about robots.

All those machines producing things built by other machines are going to become smart.

In the not so distant future there will be robots we can teach like children.

For example, I will talk to a little robot and tell him:

'Look here, robot, look here. Let us assemble a smartphone.

We'll take a slab of plastic and a screwdriver and put the smartphone together.

No, not like this, like THIS.'

And he will fail a couple of times, but gradually he will learn to do it better than me.

And once he has learned how to do it, we will produce a million copies and sell them.

Regulation of AI sounds like a good idea.

But I don't think this is possible.

One of the reasons for this is scientists' curiosity.

They don't care about regulations.

Military powers do not care about regulations, either.

They will say: 'If we Americans don't do it, then the Chinese will do it.'

And the Chinese say: 'If we don't do it, the Russians will do it.'

No matter what regulations exist, the military-industrial complexes will no doubt ignore them, because they want to avoid falling behind.

A number of news reports:

A computer-generated person: 'Welcome to Xinhua. I am the world's first AI news anchor. I was created by Xinhua and Sogou.'

CNN report

AI text generator too dangerous to make public

A program by the company OpenAI can write coherent and credible stories just like human beings.

C|net report

It's one small step for one machine, a giant leap for the entirety of machines.

IBM's new AI project Debater can hold its own against experienced debaters and won a live debate.

GMA (Good Morning America) cover story

Women targeted by "Deep fakes" - New technology being used to harass people

So-called deep fakes are being used to paste women's faces into porn videos.

Hao Li - Chief executive Pinscreen Inc., China

AI is evolving at an insane speed.

In some ways we are only at the beginning.

You have so many different applications.

It's a gold mine.

1:26:33

When deep learning rocked the AI community in 2012, we were among the first to use it on computer graphics.

A lot of our research is being funded by the government, the military and by intelligence services.

We need a source and a target to produce photo-realistic output.

Then we can do a face replacement.

(We see Hao Li's face replaced with various celebrities' faces on the fly, including Emma Stone and Xi Jinping)

Later on you can make it look like someone did something when they actually did not.

It can be used creatively to produce something funny.

But of course it can also be used to produce fake news.

That can be dangerous. With the technology in the wrong hands this can get out of hand very quickly.
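
The classic face-swap setup alluded to here can be sketched structurally: one shared encoder learns a common representation of pose and expression, plus one decoder per identity; swapping means encoding the target's face and decoding it with the source's decoder. A toy, untrained sketch with invented sizes, not Pinscreen's actual system:

```python
# Structural sketch of the classic face-swap ("deepfake") architecture:
# ONE shared encoder captures pose and expression; one decoder per
# identity renders a face. Swap = encode target, decode with source.
import torch
import torch.nn as nn

latent = 64
encoder   = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, latent), nn.ReLU())
decoder_a = nn.Sequential(nn.Linear(latent, 64 * 64))  # renders identity A
decoder_b = nn.Sequential(nn.Linear(latent, 64 * 64))  # renders identity B

# Training (omitted) reconstructs A-frames via decoder_a and B-frames via
# decoder_b, forcing the shared encoder to capture what both have in common.
face_of_b = torch.rand(1, 64, 64)       # stand-in target frame
fake = decoder_a(encoder(face_of_b))    # B's pose, rendered with A's face
print(fake.shape)                       # torch.Size([1, 4096])
```

The "source and a target" mentioned above map directly onto the two decoders; photorealism comes from far larger convolutional versions of the same idea.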

Barack Obama

We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time.

Even if they would never say those things.

Moving forward we need to be more vigilant with what we trust from the internet.

It may sound basic, but how we move forward in an age of information is going to be the difference between whether we survive or whether we become some kind of FUCKED UP DYSTOPIA.

(The alleged Obama speech is revealed to be a perfect fake fabricated with AI technology.)

Michael Kosinski - Expert for computer-based psychology

(walking on a treadmill and simultaneously working on a computer.)

One criticism of my work is that phrenology and physiognomy were silly ideas from the past.

There were people who claimed that you could predict a person's character just based on their face.

People would say this is humbug and we know that it is only some kind of badly concealed racism and superstition.

But the fact that someone in the past made an assumption and then drew the wrong conclusions doesn't automatically invalidate the claim.

Of course, people should have privacy, for instance when it comes to sexual orientation or political views.

But I'm positive that in a technological environment this is essentially impossible.

People should realize that there is no way back.

We cannot run away from the algorithms.

The sooner we accept the inevitable and inconvenient truth that privacy no longer exists, the sooner we can start thinking about what our societies will look like in the post-privacy age.

Vera Jourova (EU commissioner)

Whilst speaking about face recognition I cannot help but think of the darkest moments in human history,

when people had to live in a system that accepted one part of the population and sentenced the other to death.

What would Mengele do with such tools?

It would enable a fast and efficient selection.

And this is an apocalyptic notion.

Eleonore Pauwels - United Nations University

So in the near future all of humanity will exist in a gigantic collection of interlinked databases filled with faces, genomes, behavior and emotions.

Then you have an online avatar of yourself, recording how well you fulfill your duties as a citizen, which relationships you have, what your political attitude is and what your sexual orientation is.

Based on all this data the algorithms are capable of influencing our behavior and changing the way we think.

And further in the future they will probably also be able to change the way we feel.

Ilya Sutskever - Head of research, OpenAI

The beliefs and desires of the first AGIs will be vital.

So it's important they are programmed correctly.

If this doesn't happen, evolution, natural selection will favor those systems that prioritize their own survival above ours.

It's not that they will actively hate us and cause harm to humanity, but they will simply be too powerful.

I think a good analogy is how humans treat animals.

It's not that we hate them. People love animals and have affectionate relationships with them.

But when a new highway has to be built between two cities, we don't ask the animals for permission.

We do it because it is important to us.

I think this will be the default relationship between us and AGIs, which will operate fully autonomously on their own behalf.

If we have an arms race between a few kings about who will build the first AGI, there will be little time to make sure that the AGI they build will care deeply for humans.

There will be an avalanche, an avalanche of AGI development.

Imagine a huge unstoppable force.

And I think it's likely the Earth will be covered with solar cells and data centers.

Given these kinds of concerns, it's important that building AGIs be a cooperation between multiple countries.

The future looks bright for machines, it would be nice if that could be true for humans as well.

Jürgen Schmidhuber – Lead Researcher NNAISENSE

Is there a lot of responsibility on my shoulders?

Not really.

Was there a lot of responsibility on the parents of Albert Einstein?

His parents somehow made him, but they couldn't foresee what he would do and how he would change the world.

That's why one cannot say they are responsible.

I am not very human-centered.

I believe I am just a small gear in the universe's evolution toward higher complexity.

But I'm also aware that I am not the crown of creation and that humanity isn't either.

We set the stage for something that is bigger than us and that will transcend us.

And then we'll go where humans can't go and we'll transcend the entire universe, or at least the reachable part of it.

I find beauty in this and awe, seeing myself as part of a much grander scheme.

Yobie Benjamin - Business Angel

AI is inevitable. We need to make sure we have the necessary regulations to avoid weaponization of AI.

We need no further weaponization of such a powerful tool.

Rumman Chowdhury - Data scientist

One of the most critical things I think is the need for international regulation.

Because there is an imbalance in power.

There are companies who have more power than many countries.

How can we guarantee that people are being heard?

Philip Alston - Human Rights Advocate

There can't be a law-free or human rights-free zone.

We cannot embrace all those powerful new technologies without bringing with us the package of human rights that we fought so hard to achieve and that remains so fragile.

Max Tegmark - The Future of Life Institute

AI is neither good nor bad.

It will reinforce the wishes and desires of those who control it.

AI today is controlled by a very small group of people.

The most important question we need to ask ourselves does not require technical knowledge.

It’s the question of what kind of future society we want to create with all this technology we're making.

How do we picture the role of humanity in this world?

Closing Credits

Google and Facebook refused to give interviews for this film.

