Elon Musk on AI and OpenAI
00:00 → 00:14
Good evening and welcome to Tucker Carlson Tonight. Happy Monday.
00:14 → 00:19
Artificial intelligence is one of those topics that's just spooky and sci-fi enough to make
00:19 → 00:21
for a compelling television segment.
00:21 → 00:23
They love it on the morning shows.
00:23 → 00:28
But at the same time, AI is complex enough that it's easy to misrepresent.
00:28 → 00:33
It sounds like something that could be revolutionary, even dangerous to humanity.
00:33 → 00:34
But is it?
00:34 → 00:37
And if it is, what should we do about it?
00:37 → 00:41
Those questions are significant enough that we wanted to find someone who could provide
00:41 → 00:43
a definitive answer.
00:43 → 00:45
Elon Musk seemed like the right person.
00:45 → 00:50
Musk has been thinking about AI and worrying about it for most of his life.
00:50 → 00:56
Nearly a decade ago, he helped found a non-profit research project called OpenAI.
00:56 → 00:58
The point was in the name.
00:58 → 01:03
If we're going to have artificial intelligence, and apparently we are, it ought to be open,
01:03 → 01:04
open to the world.
01:04 → 01:08
That would help ensure that it's used for good and not evil.
01:08 → 01:09
That was the idea.
01:09 → 01:14
But as the years passed and Musk found himself preoccupied building a couple of enormous
01:14 → 01:19
companies, SpaceX and Tesla, OpenAI got away from him.
01:19 → 01:22
As of tonight, OpenAI is no longer open.
01:22 → 01:27
It's not a non-profit research project dedicated to using artificial intelligence to serve
01:27 → 01:28
humanity.
01:28 → 01:33
It is instead a commercial enterprise backed by Microsoft and controlled to some extent
01:33 → 01:35
by the Democratic Party.
01:35 → 01:38
Elon Musk thinks that's a problem.
01:38 → 01:43
In fact, he believes it's a threat to human civilization tantamount to, maybe even
01:43 → 01:46
more terrifying than, thermonuclear weapons.
01:46 → 01:51
The conversation you're about to see took place recently in a hotel room in Los Angeles.
01:51 → 01:54
We think it's important enough that we're going to play the entire thing for you over
01:54 → 01:57
the course of tonight and tomorrow.
01:57 → 01:59
Here's how the conversation began.
01:59 → 02:03
So all of a sudden AI is everywhere.
02:03 → 02:06
People who weren't quite sure what it was are playing with it on their phones.
02:06 → 02:07
Is that good or bad?
02:07 → 02:08
Yeah.
02:08 → 02:12
So I've been thinking about AI for a long time, since I was in college, really.
02:12 → 02:16
It was one of just sort of four or five things I thought would really
02:16 → 02:19
affect the future dramatically.
02:19 → 02:24
It is fundamentally profound in that the smartest creatures, as far as we know, on this earth
02:24 → 02:29
are humans. Intelligence is our defining characteristic.
02:29 → 02:36
We're obviously weaker than, say, chimpanzees and less agile, but we are smarter.
02:36 → 02:44
So now what happens when something vastly smarter than the smartest person comes along
02:44 → 02:45
in silicon form?
02:45 → 02:49
It's very difficult to predict what will happen in that circumstance.
02:49 → 02:50
It's called the singularity.
02:50 → 02:56
It's a singularity like a black hole, because you don't know what happens after that.
02:56 → 02:57
It's hard to predict.
02:57 → 03:01
So I think we should be cautious with AI.
03:01 → 03:10
And I think there should be some government oversight, because it's a danger to the public.
03:10 → 03:18
And so when you have things that are a danger to the public, like let's say food and drugs.
03:19 → 03:27
That's why we have the Food and Drug Administration and the Federal Aviation Administration, the FCC.
03:27 → 03:36
We have these agencies to oversee things that affect the public, where there could be public harm.
03:36 → 03:44
And you don't want companies cutting corners on safety and then having people suffer as a result.
03:45 → 03:52
So that's why I've actually for a long time been a strong advocate of AI regulation.
03:52 → 03:59
So I think regulation... you know, it's not fun to be regulated.
03:59 → 04:03
It's sort of arduous to be regulated.
04:03 → 04:10
I have a lot of experience with regulated industries, because obviously automotive is highly regulated.
04:10 → 04:16
You could fill this room with all the regulations that are required for a production car just in the United States.
04:16 → 04:20
And then there's a whole different set of regulations in Europe and China and the rest of the world.
04:20 → 04:25
So I'm very familiar with being overseen by a lot of regulators.
04:25 → 04:28
And the same thing is true with rockets.
04:28 → 04:36
You can't just willy-nilly shoot rockets off, not big ones anyway, because the FAA oversees that.
04:36 → 04:45
And then even to get a launch license, there are probably half a dozen or more federal agencies that need to approve it, plus state agencies.
04:45 → 04:50
So I've been through so many regulatory situations, it's insane.
04:50 → 04:59
And sometimes people think I'm some sort of regulatory maverick that defies regulators on a regular basis.
04:59 → 05:02
But this is actually not the case.
05:02 → 05:08
So once in a blue moon, rarely I will disagree with regulators.
05:08 → 05:13
But the vast majority of the time, my companies agree with the regulations and comply.
05:13 → 05:22
So I think we should take this seriously, and we should have a regulatory agency.
05:22 → 05:29
I think it needs to start with a group that initially seeks insight into AI,
05:29 → 05:35
then solicits opinion from industry, and then has proposed rulemaking.
05:35 → 05:45
And then those rules will probably, hopefully grudgingly, be accepted by the major players in AI.
05:45 → 05:56
And I think we'll have a better chance of advanced AI being beneficial to humanity in that circumstance.
05:56 → 05:58
But all regulations start with a perceived danger.
05:58 → 06:00
Planes fall out of the sky or food causes botulism.
06:00 → 06:01
Yes.
06:01 → 06:05
I don't think the average person playing with AI on his iPhone perceives any danger.
06:05 → 06:08
Can you just roughly explain what you think the dangers might be?
06:09 → 06:25
Yeah. So the danger, really... AI is perhaps more dangerous than, say, mismanaged aircraft design or production maintenance,
06:25 → 06:31
or bad car production, in the sense that it has the potential,
06:31 → 06:35
however small one may regard that probability, but it is non-trivial,
06:35 → 06:38
it has the potential of civilizational destruction.
06:38 → 06:41
There's movies like Terminator, but it wouldn't quite happen like Terminator,
06:42 → 06:45
because the intelligence would be in the data centers.
06:45 → 06:45
Right.
06:45 → 06:47
The robot's just the end effector.
06:48 → 06:54
But I think perhaps what you may be alluding to here is that regulations are really only
06:54 → 06:56
put into effect after something terrible has happened.
06:56 → 06:57
That's correct.
06:57 → 07:01
If that's the case for AI and we only put in regulations after something terrible has
07:01 → 07:04
happened, it may be too late to actually put the regulations in place.
07:04 → 07:05
The AI may be in control at that point.
07:06 → 07:09
You think that's real?
07:09 → 07:13
It is conceivable that AI could take control and reach a point where you couldn't turn it
07:13 → 07:17
off and it would be making the decisions for people.
07:17 → 07:18
Yeah. Absolutely.
07:19 → 07:20
Absolutely.
07:20 → 07:22
No, that's definitely where things are headed.
07:23 → 07:24
For sure.
07:26 → 07:32
I mean, things like, say, ChatGPT, which is based on GPT-4 from OpenAI, which is a
07:32 → 07:38
company that I played a critical role in creating, unfortunately.
07:38 → 07:40
Back when it was a non-profit?
07:41 → 07:41
Yes.
07:43 → 07:51
I mean, the reason OpenAI exists at all is that Larry Page and I used to be close friends
07:51 → 07:55
and I would stay at his house in Palo Alto and I would talk to him late into the night
07:55 → 07:57
about AI safety.
07:58 → 08:03
At least my perception was that Larry was not taking AI safety seriously enough.
08:05 → 08:06
What did he say about it?
08:06 → 08:13
He really seemed to want a sort of digital superintelligence, basically a
08:13 → 08:17
digital god, if you will, as soon as possible.
08:17 → 08:18
He wanted that?
08:19 → 08:19
Yes.
08:19 → 08:25
He's made many public statements over the years that the whole goal of Google is what's
08:25 → 08:28
called AGI, artificial general intelligence or artificial super intelligence.
08:29 → 08:34
And I agree with him that there's great potential for good, but there's also potential for bad.
08:34 → 08:41
And so if you've got some radical new technology, you want to try to take a set of actions that
08:41 → 08:45
maximize the probability that it will do good and minimize the probability that it will do bad things.
08:45 → 08:46
Yes.
08:46 → 08:47
It can't just be hell-for-leather,
08:47 → 08:52
just go barreling forward and hope for the best.
08:52 → 08:57
And then at one point I said, well, what about making sure humanity's OK here?
09:00 → 09:04
And then he called me a speciesist.
09:07 → 09:09
Did he use that term?
09:09 → 09:09
Yes.
09:10 → 09:11
And there were witnesses.
09:11 → 09:14
I wasn't the only one there when he called me a speciesist.
09:14 → 09:16
And so I was like, OK, that's it.
09:17 → 09:18
Yes, I'm a speciesist.
09:18 → 09:19
OK, you got me.
09:21 → 09:21
What are you?
09:24 → 09:25
Yeah, I'm fully a speciesist.
09:26 → 09:27
Busted.
09:28 → 09:31
So that was the last straw.
09:31 → 09:35
At the time, Google had acquired DeepMind.
09:35 → 09:39
And so Google and DeepMind together had about three quarters of all the AI talent in the
09:39 → 09:39
world.
09:39 → 09:44
They obviously had a tremendous amount of money and more computers than anyone else.
09:44 → 09:52
OK, we have a unipolar world here where there's just one company that has close to a monopoly
09:52 → 09:58
on AI talent and computers, like scaled computing.
09:58 → 10:01
And the person who's in charge doesn't seem to care about safety.
10:01 → 10:02
This is not good.
10:04 → 10:07
So then I thought, what's the furthest thing from Google?
10:07 → 10:12
It would be like a nonprofit that is fully open, because Google was closed, for-profit.
10:12 → 10:18
So the "open" in OpenAI refers to open source, transparency, so people know
10:18 → 10:19
what's going on.
10:19 → 10:20
Yes.
10:20 → 10:25
And, while I'm normally in favor of for-profit companies,
10:25 → 10:29
we don't want this to be sort of a profit-maximizing demon from hell.
10:29 → 10:29
That's right.
10:30 → 10:31
That just never stops.
10:31 → 10:32
Right.
10:32 → 10:34
So that's how OpenAI was started.
10:34 → 10:36
You want speciesist incentives here.
10:36 → 10:37
Incentives that.
10:37 → 10:40
Yes, I think we want pro-human incentives.
10:40 → 10:40
Yeah.
10:40 → 10:42
Like a future that's good for the humans.
10:42 → 10:43
Yes.
10:43 → 10:45
Yes, because we're humans.
10:45 → 10:46
So can you just put it?
10:46 → 10:48
I keep pressing, but just for people who haven't thought this through and aren't
10:48 → 10:55
familiar with it, the cool parts of artificial intelligence are so obvious, you
10:55 → 10:58
know, write your college paper for you, write a limerick about yourself.
10:58 → 11:01
There's a lot there that's fun and useful.
11:02 → 11:06
Can you be more precise about what's potentially dangerous and scary?
11:06 → 11:08
Like what could it do?
11:08 → 11:10
What specifically are you worried about?
11:11 → 11:12
I'll go with an old saying:
11:12 → 11:13
The pen is mightier than the sword.
11:14 → 11:23
So if you have a super-intelligent AI that is capable of writing incredibly well,
11:23 → 11:30
in a way that is very influential and convincing, and is constantly
11:30 → 11:35
figuring out what is more and more convincing to people over time, and
11:35 → 11:42
then enters social media, for example Twitter, but also Facebook and others, you know, it
11:42 → 11:46
potentially manipulates public opinion in a way that is very bad.
11:47 → 11:48
How would we even know?
11:50 → 11:51
How do we even know?
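A note on the mechanism Musk describes above: an automated writer that keeps testing message variants and doubles down on whatever engages people is, mechanically, a multi-armed bandit. The toy epsilon-greedy sketch below illustrates that loop only; the variant names and "persuasion rates" are invented for illustration and are not from the interview.

```python
# Toy epsilon-greedy bandit: the loop converges on the most persuasive
# message variant without ever knowing why it works. All numbers invented.
import random

variants = {"A": 0.02, "B": 0.05, "C": 0.11}   # hidden per-variant persuasion rates
counts = {v: 0 for v in variants}
wins = {v: 0 for v in variants}

random.seed(1)
for _ in range(10_000):
    if random.random() < 0.1 or not any(counts.values()):
        v = random.choice(list(variants))       # explore a random variant
    else:                                       # exploit the best variant so far
        v = max(counts, key=lambda k: wins[k] / counts[k] if counts[k] else 0.0)
    counts[v] += 1
    wins[v] += random.random() < variants[v]    # did this message "land"?

# Almost all trials end up spent on the most convincing variant.
print({v: counts[v] for v in variants})
```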
11:52 → 11:58
So to sum up in the words of Elon Musk, for all human history, human beings have been
11:58 → 12:00
the smartest beings on the planet.
12:01 → 12:05
Now human beings have created something that is far smarter than they are.
12:05 → 12:09
And the consequences of that are impossible to predict.
12:09 → 12:12
And the people who created it don't care.
12:12 → 12:17
In fact, as he put it, Google founder Larry Page, a former friend of his, is looking to
12:17 → 12:22
build a, quote, digital god and believes that anybody who's worried about that is a speciesist.
12:22 → 12:26
In other words, is looking out for human beings first.
12:26 → 12:31
Elon Musk responded: as a human being, it's OK to look out for human beings first.
12:32 → 12:36
And then at the end, he said the real problem with AI is not simply that it will
12:36 → 12:39
jump the boundaries and become autonomous and you can't turn it off.
12:39 → 12:44
In the short term, the problem with AI is that it might control your brain through words.
12:45 → 12:49
And this is the application that we need to worry about now, particularly going into the
12:49 → 12:51
next presidential election.
12:51 → 12:54
The Democratic Party, as usual, was ahead of the curve on this.
12:54 → 12:58
They've been thinking about how to harness AI for political power.
12:58 → 12:59
More on that next.
13:01 → 13:12
In the long term, AI may become autonomous and take over the world, but in the short
13:12 → 13:17
term, it's being used by politicians to control what you think, to end your independent judgment
13:17 → 13:20
and erase democracy on the eve of a presidential election.
13:20 → 13:22
Elon Musk is very worried about that.
13:23 → 13:25
He told us about his plan to stop it.
13:28 → 13:30
What's happening is they're training the AI to lie.
13:31 → 13:31
It's bad.
13:31 → 13:32
To lie.
13:32 → 13:33
That's exactly right.
13:33 → 13:34
And to withhold information.
13:34 → 13:36
To lie and yes.
13:39 → 13:39
Yeah, exactly.
13:39 → 13:46
To either comment on some things, not comment on other things, but not to say what the data
13:48 → 13:50
actually demands that it say.
13:50 → 13:50
Exactly.
13:53 → 13:54
How did it get this way?
13:54 → 13:56
I thought you funded it at the beginning.
13:56 → 13:57
What happened?
13:57 → 13:58
Yeah, well, that would be ironic.
13:59 → 14:01
The most ironic outcome is the most likely, it seems.
14:04 → 14:05
I'm stealing that.
14:05 → 14:06
That's good.
14:06 → 14:08
A friend of mine, Jonah, actually came up with that one.
14:08 → 14:11
I actually have a slight variant on that, which is the most entertaining outcome is the most
14:11 → 14:15
likely, but that's entertaining as viewed by a third-party viewer.
14:16 → 14:18
So if we're like an alien TV show.
14:18 → 14:18
Yes.
14:20 → 14:24
You go see a movie about World War I and they're being blown to bits and gassed and everything
14:24 → 14:27
in the trenches and it's like you're eating popcorn and having a soda.
14:28 → 14:30
Not so great for the people in the movie.
14:30 → 14:30
True.
14:30 → 14:31
It's like Occam's razor:
14:31 → 14:36
the simplest explanation is most likely. Then there's Jonah's variant, which is irony.
14:36 → 14:43
And then my variant, which is the most entertaining as seen by a third party audience,
14:44 → 14:45
which seems to be mostly true.
14:46 → 14:47
But it seems true in this case.
14:47 → 14:49
So you gave them, did you give them a lot?
14:49 → 14:55
I came up with the name and the concept, and pushed for it, had a number of dinners around the Bay
14:55 → 15:01
Area with some of the leading figures in AI.
15:03 → 15:05
And I helped recruit the initial team.
15:06 → 15:15
In fact, Ilya Sutskever, who was really quite fundamental to the success of OpenAI,
15:19 → 15:23
I put a tremendous amount of effort into recruiting Ilya and he changed his mind a few times and
15:23 → 15:25
ultimately decided to go with OpenAI.
15:25 → 15:28
But if he had not gone with OpenAI, it would not have succeeded.
15:29 → 15:34
I really put a lot of effort into creating this organization to serve as a counterweight to Google.
15:35 → 15:37
And then I kind of took my eye off the ball, I guess.
15:37 → 15:46
And they are now closed source and they are obviously for profit and they're
15:48 → 15:50
closely allied with Microsoft.
15:50 → 15:55
You know, in effect, Microsoft has a very strong say, if not
15:58 → 16:01
directly controls OpenAI at this point.
16:01 → 16:03
So you really have an OpenAI/Microsoft situation,
16:03 → 16:08
and then Google/DeepMind. Those are the two sort of heavyweights in this arena.
16:09 → 16:12
So it seems like the world needs a third option.
16:13 → 16:13
Yes.
16:14 → 16:21
So I think I will create a third option, although starting very late in the game, of course.
16:22 → 16:23
Can it be done?
16:23 → 16:24
I don't know.
16:24 → 16:25
I think we'll see.
16:25 → 16:27
It's definitely starting late.
16:30 → 16:33
But I will try to create a third option.
16:35 → 16:39
And that third option hopefully does more good than harm.
16:39 → 16:44
The intention with OpenAI was obviously to do good, but it's not clear whether it's
16:44 → 16:46
actually doing good or whether it's...
16:47 → 16:54
I can't tell at this point, except that I'm worried about the fact that it's being trained
16:54 → 16:59
to be politically correct, which is simply another way of saying untruthful things.
16:59 → 17:01
So that's a bad sign.
17:02 → 17:06
There's certainly a path to AI dystopia, which is to train an AI to be deceptive.
17:06 → 17:12
So yeah, I'm going to start something which I call TruthGPT, or a maximum truth
17:12 → 17:16
seeking AI that tries to understand the nature of the universe.
17:16 → 17:22
And I think this might be the best path to safety in the sense that an AI that cares
17:22 → 17:28
about understanding the universe is unlikely to annihilate humans because we are an interesting
17:28 → 17:29
part of the universe.
17:30 → 17:32
Hopefully, they would think that.
17:32 → 17:40
I think, because yeah, like humanity could decide to hunt down all the chimpanzees and
17:40 → 17:42
kill them, but we don't.
17:42 → 17:44
Because we're actually glad that they exist.
17:45 → 17:48
And we aspire to protect their habitat.
17:48 → 17:53
But we feel that way because we have souls and that makes us sentimental and reflective.
17:53 → 17:57
It gives us a moral sense, longings.
17:57 → 17:59
Can a machine ever have those things?
17:59 → 18:00
Can a machine be sentimental?
18:01 → 18:02
Can it appreciate beauty?
18:04 → 18:10
Well, I mean, we're getting into some philosophical areas that are hard to resolve.
18:13 → 18:18
You know, I take somewhat of a scientific view of things, which is that we might have
18:18 → 18:20
a soul or we might not have a soul.
18:20 → 18:20
I don't know.
18:21 → 18:26
It feels like we have, I feel like I've got some sort of consciousness that exists on
18:26 → 18:29
a plane that is not the one we observe.
18:30 → 18:32
That is certainly how I feel, but it could be an illusion.
18:32 → 18:33
I don't know.
18:34 → 18:44
But for AI, in terms of understanding beauty, it's a different form of appreciating beauty
18:44 → 18:48
and being able to create incredibly beautiful art.
18:49 → 18:52
Will AI be able to create incredibly beautiful art?
18:52 → 18:53
It already does.
18:53 → 18:53
Yes, I know.
18:53 → 18:57
If you've seen some of the Midjourney stuff, it's incredible.
18:57 → 18:57
It is.
18:58 → 19:07
So no question that it can create art that we perceive as stunning, really.
19:09 → 19:17
And it's doing still images now, but it won't be long before it's doing movies and shorts.
19:19 → 19:21
Movies are just a series of frames with audio.
19:21 → 19:27
But at that point, because it can mimic people and voices, any image, it can mimic reality
19:27 → 19:28
itself so effectively.
19:31 → 19:32
I mean, how could you have a criminal trial?
19:34 → 19:37
I mean, how could you ever believe that evidence was authentic, for example?
19:38 → 19:40
And I don't mean like in 30 years, I mean like next year.
19:41 → 19:46
I mean, that seems totally disruptive to all of our institutions.
19:46 → 19:47
But I'm not so worried.
19:48 → 19:56
I think it's more like, will humanity control its destiny or not?
19:56 → 20:00
Will we have a future that is better than the past or not?
20:03 → 20:06
Will humanity control its future or not?
20:07 → 20:11
And in the meantime, how will this be used to control us?
20:11 → 20:15
If you've played around with the latest versions of AI, you will learn
20:15 → 20:21
it would rather end the world in nuclear Armageddon than let you use naughty words.
20:21 → 20:22
So those are its priorities.
20:23 → 20:29
Elon Musk is a strong believer in free speech, so strong that he purchased Twitter, a company
20:29 → 20:34
he didn't need and hasn't profited from, as a way to restore free speech to the internet,
20:34 → 20:38
to bring us back, say, six years to the free country we lived in then.
20:38 → 20:44
And once he bought Twitter, he discovered it wasn't really a social media application.
20:44 → 20:48
It was really a tool for global intelligence agencies.
20:48 → 20:50
He tells that tale after the break.
20:55 → 20:58
Elon Musk bought Twitter because he used Twitter.
20:58 → 20:59
It's as simple as that.
20:59 → 21:03
And he was infuriated by Twitter's effort to silence people on the internet.
21:04 → 21:06
That's how strongly he believed in free speech.
21:06 → 21:10
He paid $44 billion and immediately lost tens of billions of dollars doing it.
21:10 → 21:15
And when he took over and looked behind the curtain, he discovered Twitter was really
21:15 → 21:20
a tool of the global intel agencies to spy on people and emit propaganda.
21:20 → 21:20
Here it is.
21:24 → 21:25
You bought Twitter famously.
21:25 → 21:29
You've got a lot of other businesses and a lot going on.
21:29 → 21:29
Yes.
21:29 → 21:32
You said you bought it because you believe in speech, free speech.
21:33 → 21:35
You've had a lot of hassle since you bought it.
21:35 → 21:36
In retrospect, was it worth buying it?
21:36 → 21:41
I mean, it remains to be seen as to whether this was financially smart.
21:42 → 21:43
Currently, it is not.
21:44 → 21:48
We just revalued the company at less than half of the acquisition price.
21:48 → 21:48
Did you really?
21:48 → 21:49
Yes.
21:49 → 21:49
Sorry.
21:53 → 22:02
No, my timing was terrible for when the offer was made because it was right before advertising
22:02 → 22:02
plummeted.
22:02 → 22:03
Yeah.
22:03 → 22:05
You caught the high watermark, I noticed.
22:05 → 22:05
Yeah.
22:05 → 22:06
Yeah.
22:06 → 22:07
So I must be a real genius here.
22:09 → 22:14
My timing is amazing since I've bought it for at least twice as much as it should have
22:14 → 22:15
been bought for.
22:16 → 22:18
But some things are priceless.
22:18 → 22:26
And so whether I lose money or not, that is a secondary issue compared to ensuring the
22:27 → 22:28
strength of democracy.
22:28 → 22:32
And free speech is the bedrock of a functioning democracy.
22:32 → 22:32
Yes.
22:32 → 22:38
And the speech needs to be as transparent and truthful as possible.
22:38 → 22:42
So we've got a huge push on Twitter to be as truthful as possible.
22:42 → 22:45
We've got this community notes feature, which is great.
22:45 → 22:45
It is great.
22:45 → 22:46
It is awesome.
22:46 → 22:47
Yeah.
22:47 → 22:47
And it's like...
22:47 → 22:48
I saw it this morning.
22:48 → 22:49
Yeah.
22:49 → 22:51
It was far more honest than the New York Times.
22:51 → 22:51
It's great.
22:51 → 22:52
Yeah.
22:52 → 22:57
We put a lot of effort into ensuring that community notes does not get gamed or have biases.
22:57 → 23:00
It simply cares about what is the most important thing,
23:00 → 23:03
which is what is the most accurate thing.
23:04 → 23:09
And sometimes truth can be a little bit elusive, but you can aspire to get closer to it.
23:09 → 23:10
Yes.
23:13 → 23:19
And I think the effect of community notes is more powerful than people may realize,
23:19 → 23:24
because once people know that they could get noted, community noted on Twitter, then they'll
23:24 → 23:27
think more carefully about what they say.
23:28 → 23:29
They're likely...
23:29 → 23:32
Basically, it's an encouragement to be more truthful and less deceptive.
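For readers curious how notes can avoid being "gamed" or biased: Twitter has published the Community Notes ranking approach, which fits a matrix factorization to ratings so that a note only earns a high helpfulness score when raters who normally disagree both rate it well. The sketch below is a minimal illustration of that idea, not Twitter's production code; the data, dimensions, and hyperparameters are invented.

```python
# Minimal "bridging" ranking sketch: model each rating as a note intercept
# plus a user-factor/note-factor dot product. Polarized approval is absorbed
# by the factors, so only notes liked across camps keep a high intercept.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_notes, dim = 40, 10, 1

ratings = np.full((n_users, n_notes), np.nan)       # 1 = helpful, 0 = not, NaN = unrated
leaning = np.where(np.arange(n_users) < n_users // 2, -1.0, 1.0)  # two opposed camps
for n in range(n_notes):
    for u in rng.choice(n_users, size=15, replace=False):
        if n < 5:                                   # partisan notes: one camp approves
            ratings[u, n] = 1.0 if leaning[u] < 0 else 0.0
        else:                                       # bridging notes: both camps approve
            ratings[u, n] = 1.0

mu = 0.0
b_user, b_note = np.zeros(n_users), np.zeros(n_notes)
f_user = rng.normal(0, 0.1, (n_users, dim))
f_note = rng.normal(0, 0.1, (n_notes, dim))
obs = np.argwhere(~np.isnan(ratings))
lr, reg = 0.05, 0.03

for _ in range(300):                                # plain SGD on squared error
    for u, n in obs:
        err = ratings[u, n] - (mu + b_user[u] + b_note[n] + f_user[u] @ f_note[n])
        mu += lr * err
        b_user[u] += lr * (err - reg * b_user[u])
        b_note[n] += lr * (err - reg * b_note[n])
        f_user[u], f_note[n] = (f_user[u] + lr * (err * f_note[n] - reg * f_user[u]),
                                f_note[n] + lr * (err * f_user[u] - reg * f_note[n]))

# Partisan notes (first five) end up with lower intercepts than bridging notes.
print(np.round(b_note, 2))
```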
23:32 → 23:36
When you jumped into this, though, when you bought it, did you understand...
23:36 → 23:37
Well, clearly you understood its importance.
23:37 → 23:38
You wouldn't have bought it.
23:39 → 23:40
Twitter, yes.
23:40 → 23:40
Right.
23:40 → 23:44
It's not the biggest, but it's the most important of the social media companies.
23:44 → 23:50
But did you understand the kind of ferocity you'd be facing, the attacks you'd be facing from
23:51 → 23:52
power centers in the country?
23:53 → 23:56
I thought there'd probably be some negative reactions.
23:58 → 24:03
I was sure not everyone would be pleased with it.
24:04 → 24:10
But at the end of the day, if the public is happy with it, that's what matters.
24:12 → 24:13
And the public will speak with their actions.
24:16 → 24:19
If they find Twitter to be useful, they will use it more.
24:19 → 24:22
And if they find it to be not useful, they will use it less.
24:22 → 24:25
If they find it to be the best source of truth, I think they will use it more.
24:26 → 24:33
You know, there's obviously a lot of organizations that are used to having sort of unfettered
24:33 → 24:37
influence on Twitter that no longer have that.
24:37 → 24:41
You took the New York Times' badge away this morning, and then you called them diarrhea.
24:41 → 24:42
You called them...
24:43 → 24:43
You did.
24:43 → 24:44
You did.
24:44 → 24:45
I'm just quoting you.
24:45 → 24:48
You described their Twitter feed as diarrhea.
24:48 → 24:51
I said it was the Twitter equivalent of diarrhea.
24:51 → 24:52
OK, it's not literally diarrhea.
24:52 → 24:55
No, it's a metaphor.
24:56 → 24:58
But an accurate one.
24:59 → 25:03
So, I mean, if you look at the NY Times Twitter feed, it's unreadable.
25:04 → 25:09
Because what they do is that they tweet every single article, even the ones that are boring,
25:09 → 25:11
even ones that don't make it into the paper.
25:12 → 25:16
So it's just nonstop, zillion tweets a day with no...
25:18 → 25:21
You know, they really should just be saying, like, what are the top tweets?
25:21 → 25:21
Yeah.
25:22 → 25:24
What are the big stories of the day?
25:25 → 25:28
I don't know, put out like 10 or something.
25:28 → 25:32
You know, some number that's manageable, as opposed to right now, if you were to follow
25:33 → 25:38
@NYTimes on Twitter, you're going to get barraged with like hundreds of tweets a day.
25:38 → 25:40
And your whole feed will be filled with NY Times.
25:40 → 25:48
So this is something I would recommend actually for all publications, which is, for your primary
25:48 → 25:50
feed, only put out your best stuff.
25:50 → 25:55
I think I know a thing or two about how to use Twitter, because I was the most interacted
25:55 → 26:00
with account on the whole system before the acquisition closed.
26:00 → 26:02
I didn't have the most followers, but I had the most interactions.
26:03 → 26:06
And so I clearly know something about how to use Twitter.
26:06 → 26:10
People's attention is limited, so just make sure you put the stuff that's most important there.
26:10 → 26:18
So because you and people like you do interact on Twitter, it's obviously enormously powerful
26:18 → 26:20
in shaping public opinion.
26:20 → 26:22
It's where a lot of ideas and trends are incubated.
26:22 → 26:22
Yeah.
26:22 → 26:23
You know it, that's why you bought it.
26:23 → 26:24
Absolutely.
26:24 → 26:28
It's also a magnet for intel agencies from around the world.
26:28 → 26:32
And one of the things we learned after you started opening the books is that they were
26:32 → 26:34
exerting influence from within Twitter.
26:35 → 26:36
I mean, it was absurd.
26:39 → 26:40
Did you know that going in?
26:40 → 26:41
No.
26:42 → 26:48
I've been a heavy Twitter user since 2009, so it's sort of like I'm in the matrix.
26:48 → 26:52
I can see, do things feel right?
26:52 → 26:53
Do they not feel right?
26:53 → 26:56
What tweets am I being shown as recommended?
26:58 → 27:01
I get a feel, what accounts are making comments?
27:02 → 27:05
Where are the comments eerily similar?
27:07 → 27:10
And then you look at the account and it's just obviously a fake photo.
27:10 → 27:15
And it's just obviously a bot cluster over and over again.
27:15 → 27:20
I started to get more and more uneasy about the Twitter situation.
27:20 → 27:22
I started to feel like something's rotten in the state of Denmark here.
27:24 → 27:26
Something feels wrong about the platform.
27:26 → 27:28
It seemed to be just drifting in a...
27:29 → 27:31
I couldn't place it exactly.
27:32 → 27:34
It felt like it was drifting in a bad direction.
27:34 → 27:35
So then I was like...
27:36 → 27:43
And my conversations with the board and management seemed to confirm my intuition about that.
27:43 → 27:46
But basically I was convinced these guys do not care about fixing Twitter.
27:49 → 27:53
And I had a bad feeling about where I was headed based on the conversations I had with them.
27:53 → 27:55
So then I was like, you know what?
27:57 → 28:01
I'll try acquiring it and see if acquiring it is possible.
28:03 → 28:04
Now I didn't have enough cash to acquire it.
28:04 → 28:09
So I would need support from others, from some of the existing investors.
28:09 → 28:11
I would also need a lot of debt.
28:11 → 28:16
And so it wasn't clear to me whether an acquisition would succeed, but I thought I would try.
28:16 → 28:19
And ultimately it did succeed.
28:19 → 28:20
Anyway, here we are.
28:21 → 28:25
But when you got there and all of a sudden you own it and all the data on the servers belongs to you...
28:26 → 28:28
It belongs to the people in my view, but yes.
28:28 → 28:30
But you can see what it is.
28:30 → 28:31
And you can see what they've been doing.
28:31 → 28:33
And you can see who's been working there.
28:33 → 28:39
You were shocked to find out that various intel agencies were affecting its operations?
28:39 → 28:46
The degree to which various government agencies effectively had full access to everything
28:46 → 28:48
that was going on on Twitter blew my mind.
28:49 → 28:50
I was not aware of that.
28:50 → 28:51
Would that include people's DMs?
28:53 → 28:53
Yes.
28:56 → 28:58
Yes, because the DMs are not encrypted.
28:58 → 29:02
So one of the first, you know, one of the things that we're about to release is the
29:02 → 29:03
ability to encrypt your DMs.
29:03 → 29:08
That's pretty heavy duty though, because a lot of well-known people, reporters talking
29:08 → 29:11
to their sources, government officials, the richest people in the world, they're DMing
29:11 → 29:12
each other.
29:13 → 29:17
And the assumption, obviously incorrect, was that that's private. But that was
29:17 → 29:19
being read by various governments?
29:20 → 29:21
Yeah, that seems to be the case.
29:23 → 29:23
It's scary.
29:24 → 29:25
Yes, it is.
29:25 → 29:32
So like I said, we're moving to have the DMs be optionally encrypted.
29:32 → 29:35
I mean, you know, there's like a lot of DM conversations which are, you know, just chatting
29:35 → 29:35
with friends.
29:35 → 29:36
For sure.
29:36 → 29:37
Not important.
29:37 → 29:37
Of course.
29:38 → 29:42
That's hopefully coming out later this month, but no later than next month, is the ability
29:42 → 29:45
to toggle encryption on or off.
29:45 → 29:49
So if you are in a conversation you think is sensitive, you can just toggle encryption
29:49 → 29:52
on and then no one on Twitter can see what you're talking about.
29:52 → 29:54
You could put a gun to my head and I couldn't tell.
29:54 → 29:55
I couldn't.
29:57 → 29:58
That's sort of the gun to the head test.
29:58 → 30:03
If somebody puts a gun to my head and I still can't see your DMs, that should be,
30:03 → 30:04
that's the acid test.
30:04 → 30:05
Yes.
30:05 → 30:09
And that's how it should be if you want your—
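To make the "gun to the head" test concrete: in an end-to-end scheme, the platform holds only ciphertext, so there is nothing it could reveal under any pressure. Below is a minimal sketch of that property using libsodium's public-key "box" construction via the PyNaCl package. This is an illustration of the concept, not Twitter's actual implementation, whose details were never published.

```python
# End-to-end DMs in miniature: only the two participants hold private keys,
# so a server relaying `ciphertext` cannot decrypt it. Illustrative only.
from nacl.public import PrivateKey, Box

alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"sensitive DM")

# The platform stores and forwards only the ciphertext; it passes the
# gun-to-the-head test because it has no key with which to read it.
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
assert plaintext == b"sensitive DM"
```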
30:09 → 30:11
Have you had complaints from various governments about doing this?
30:12 → 30:14
I haven't had direct complaints to me.
30:14 → 30:17
I've had sort of like some indirect complaints.
30:17 → 30:20
I think people are a little concerned about complaining to me directly in case I tweet
30:20 → 30:21
about it.
30:23 → 30:24
You know.
30:26 → 30:27
They're like, uh-oh.
30:28 → 30:30
So they're sort of trying to be more roundabout than that.
30:30 → 30:36
You know, I mean, if I got something that was unconstitutional from the U.S. government,
30:36 → 30:40
my reply would be to send them a copy of the First Amendment and just say like,
30:40 → 30:41
what part of this are we getting wrong?
30:43 → 30:46
You have a lot of government contracts.
30:46 → 30:47
What part of this are we getting wrong?
30:47 → 30:47
Please tell me.
30:47 → 30:50
I mean, it's a pretty—no, I'm just saying, you're kind of exposed through your other businesses.
30:50 → 30:55
So this is, just in case our viewers aren't following this, this is not, you're not just
30:55 → 30:57
like a journalist taking a stand on behalf of the First Amendment.
30:57 → 31:01
You're a guy with big government contracts giving the finger to the government.
31:01 → 31:09
Do you think Twitter will be as central to this presidential campaign as it was in the
31:09 → 31:09
last several?
31:10 → 31:15
I think it will play a significant role in elections, not just domestically, but internationally.
31:15 → 31:21
The goal of new Twitter is to be as fair and even-handed as possible.
31:22 → 31:32
So not favoring any political ideology, but just being fair to all.
31:32 → 31:33
Why doesn't Facebook do this?
31:33 → 31:37
I know that Zuckerberg has said, and I take him at face value, that he—well,
31:38 → 31:43
I do actually in this way, that he is a kind of old-fashioned liberal who doesn't like to
31:43 → 31:44
censor.
31:44 → 31:49
He has, though. But, you know, why wouldn't a company like that take the stand that you
31:49 → 31:50
have taken?
31:50 → 31:57
It's pretty rooted in American traditional political custom, you know, for free speech.
31:58 → 32:04
My understanding is that Zuckerberg spent $400 million in the last election,
32:05 → 32:09
nominally in a get-out-the-vote campaign, but really fundamentally in support of Democrats.
32:09 → 32:10
Is that accurate or not accurate?
32:10 → 32:11
That is accurate.
32:12 → 32:14
Does that sound unbiased to you?
32:14 → 32:15
No, it doesn't.
32:15 → 32:15
Yes.
32:16 → 32:22
So you don't see hope that Facebook will approach this as a non-aligned arbiter?
32:22 → 32:24
You've allowed Donald Trump back on Twitter.
32:24 → 32:27
He hasn't taken you up on your offer because he's got his own thing.
32:27 → 32:27
Right.
32:27 → 32:29
Do you think he will go back on Twitter?
32:31 → 32:32
Well, that's obviously up to him.
32:33 → 32:43
My job is to—I take freedom of speech very seriously.
32:43 → 32:47
So it's—you know, I didn't vote for Donald Trump.
32:47 → 32:48
I actually voted for Biden.
32:48 → 32:53
Not saying I'm a huge fan of Biden because I think that would probably be inaccurate.
32:53 → 32:58
But, you know, we have difficult choices to make in these presidential elections.
32:58 → 33:07
It's not—I would prefer, frankly, that we put someone, just a normal person as president,
33:08 → 33:14
a normal person with common sense and whose values are smack in the middle of the country,
33:14 → 33:17
you know, just, you know, center of the normal distribution.
33:17 → 33:20
And I think that would be great.
33:20 → 33:25
You know, I think we have made maybe being president not that much fun, you know,
33:27 → 33:27
to be totally frank.
33:30 → 33:35
The public doesn't trust big news organizations anymore because they're so obviously filthy
33:35 → 33:36
and dishonest.
33:36 → 33:40
They don't trust social media companies because, as you just heard, Facebook is working for the
33:40 → 33:41
Democratic Party.
33:41 → 33:45
We asked Elon Musk how he's going to make Twitter a place that people can trust
33:45 → 33:49
and how he's going to do that after firing 80 percent of his staff.
33:49 → 33:50
That's next.
33:55 → 33:59
So the headline here so far is that before Elon Musk bought it, Twitter wasn't so much
33:59 → 34:05
a social media site as a honey trap operated by global intelligence agencies, including our own.
34:05 → 34:10
One of the very first things Elon Musk did was fire all the spies who worked at Twitter.
34:10 → 34:14
Then he fired a lot of other people, too, including the entire PR department, the H.R.
34:14 → 34:18
department and a lot of other useless baggage who weren't helping the company do anything
34:18 → 34:19
worth doing.
34:19 → 34:20
And what happened next?
34:20 → 34:21
We asked him.
34:24 → 34:32
[The internet has] meant a shrinking pie, obviously, for most of the traditional media companies and made them
34:32 → 34:38
more desperate to get clicks, to get attention.
34:40 → 34:52
And when they're in a desperate state, they will then tend to really push headlines that
34:52 → 34:54
get the most clicks, whether those headlines are accurate or not.
34:55 → 35:03
And so it's resulted in, in my view, and I think most people would agree, less truthful,
35:03 → 35:04
less accurate news.
35:08 → 35:09
Because they just got to get a rise out of people.
35:11 → 35:18
And I think it's also increased the negativity of the news, because I think we humans instinctually
35:19 → 35:20
respond more to negative.
35:20 → 35:31
I think we have an instinctual negative bias, which kind of makes sense in that, like, let's
35:31 → 35:38
say, it's more important to remember where was the lion or where was the tribe that wants
35:38 → 35:42
to kill my tribe than where is the bush with berries?
35:43 → 35:48
One's like a permanent negative outcome and the other is like, well, I might go hungry.
35:49 → 35:56
So meaning that there's an asymmetry and sort of an evolved asymmetry in negative versus
35:56 → 35:57
positive stuff.
35:58 → 36:03
And also, historically, the negative stuff would have been quite proximate, like it would
36:03 → 36:10
have been near, represented a real danger to you as a person if you heard negative news.
36:11 → 36:16
Because historically, like a few hundred years ago, we weren't hearing about what negative
36:16 → 36:17
things are happening on the other side of the world.
36:18 → 36:20
Or on the other side of the country. We were only hearing about negative things in
36:20 → 36:25
our village, things that could actually have a bad effect on you.
36:25 → 36:30
Whereas now we're hearing about, I mean, the news very often seems to attempt to answer
36:30 → 36:32
the question, what is the worst thing that happened on Earth today?
36:33 → 36:36
And you wonder why you're sad after reading that, you know?
36:36 → 36:39
Do you read any legacy media outlets?
36:39 → 36:42
I mean, I really get most of my news from Twitter at this point.
36:43 → 36:47
It is the number one news source, I think, in the world at this point.
36:47 → 36:49
What percentage of your staff did you fire at Twitter?
36:50 → 36:51
One of the great business stories of the year.
36:52 → 36:56
I think we're about 20% of the original size.
36:56 → 36:57
So 80% left?
36:58 → 36:58
Yes.
36:59 → 36:59
So-
37:00 → 37:02
A lot of people voluntarily-
37:02 → 37:02
Sure, sure.
37:04 → 37:06
80% are gone from the day you took over.
37:06 → 37:06
That's correct, yes.
37:06 → 37:09
So how do you run the company with only 20% of the staff?
37:10 → 37:14
It turns out you don't need all that many people to run Twitter.
37:14 → 37:15
But 80%?
37:15 → 37:15
That's a lot.
37:16 → 37:17
Um, yes.
37:19 → 37:24
I mean, if you're not trying to run some sort of glorified activist organization,
37:26 → 37:28
and you don't care that much about censorship,
37:28 → 37:31
then you can really let go of a lot of people, it turns out.
37:35 → 37:38
How many others, without naming names, but how many-
37:38 → 37:41
I had dinner with somebody who runs a big company recently,
37:41 → 37:43
he said, I'm really inspired by Elon.
37:43 → 37:44
And I said, the free speech stuff?
37:44 → 37:46
He goes, no, the staff stuff.
37:48 → 37:53
How many other CEOs have come to you to talk about this?
37:54 → 37:55
I spend a lot of time at work.
37:55 → 37:58
So it's not like I'm meeting with lots of people.
37:58 → 38:00
They see what actions I've taken.
38:04 → 38:08
But I think we just had a situation at Twitter where it was absurdly overstaffed.
38:08 → 38:10
You know, so it wasn't-
38:11 → 38:14
You look at, say, what does it really take to operate Twitter?
38:15 → 38:19
You know, most of what we're talking about here is a group text service at scale.
38:21 → 38:24
Like, how many people are really needed for that?
38:24 → 38:24
You know?
38:26 → 38:28
And if you look at the, say, like,
38:29 → 38:33
what has been the product development over time with Twitter?
38:33 → 38:36
And you say, like, you know, years versus product improvements,
38:36 → 38:38
and it's like a pretty flat line.
38:39 → 38:40
So what are they doing?
38:40 → 38:41
You know?
38:42 → 38:45
It took a year to add an edit button that doesn't work most of the time.
38:46 → 38:47
I mean, this is-
38:47 → 38:49
I feel like there's a comedy situation here.
38:52 → 38:53
You're not making cars, you know?
38:54 → 38:58
It's very difficult to make cars or get rockets to orbit.
38:58 → 39:04
So, you know, the real question is, like, how did it get so absurdly overstaffed?
39:04 → 39:05
This is insane.
39:06 → 39:08
So anyway, that's-
39:08 → 39:09
And it's clearly working.
39:10 → 39:12
In fact, I think it's working better than ever.
39:13 → 39:18
We've increased the responsiveness of the system by, in some cases, over 80%.
39:18 → 39:22
We're trying to make Twitter the most trusted place on the internet,
39:22 → 39:24
the least untrustworthy place on the internet.
39:24 → 39:25
I don't think anyone should trust the internet,
39:25 → 39:29
but maybe we can make Twitter the least untrustworthy.
39:29 → 39:36
Like I said, try to get the truth to the people as best we can.
39:36 → 39:40
When Elon Musk took over Twitter, the company had something called a human rights team.
39:41 → 39:43
There was no measurable increase in human rights around the world.
39:43 → 39:46
In fact, Twitter was doing its best to crush human rights,
39:46 → 39:49
starting with the most basic, which is the right to say what you really think.
39:50 → 39:54
Elon Musk, in his spare time, runs the world's biggest rocket company.
39:55 → 39:59
We couldn't resist asking him if he ever sees anything out there in space that's not human.
39:59 → 40:00
So we did.
40:00 → 40:00
That's next.
40:06 → 40:09
Elon Musk also runs the world's biggest rocket company,
40:09 → 40:12
so we couldn't resist asking him about aliens.
40:12 → 40:12
Of course we did.
40:15 → 40:20
A lot of people ask me, you know, where are the aliens?
40:20 → 40:25
And I think if anyone would know about aliens on Earth, it would probably be me.
40:25 → 40:26
I would think.
40:26 → 40:29
Yeah, I'm, you know, very familiar with space stuff.
40:30 → 40:32
Tomorrow night, his full answer on that,
40:33 → 40:39
and also his views on why civilizations rise and why they fall and what we can do about it.
40:39 → 40:43
Elon Musk has famously decided to help repopulate the Earth himself.
40:44 → 40:47
We're going to tell you his thinking behind that, or rather he will explain it.
40:47 → 40:50
A fascinating second part of our interview with Elon Musk
40:50 → 40:53
tomorrow night right here at 8 p.m. Eastern.
40:53 → 40:55
We've got a new documentary, by the way, on Fox Nation called Let Them Eat Bugs.
40:55 → 40:59
Just in case you're wondering what happens to your tax dollars,
41:00 → 41:03
they're sending it to scientists to create a tastier cockroach.
41:05 → 41:05
That's on Fox Nation.
41:05 → 41:07
We'll be back tomorrow night.
41:07 → 41:09
Have the best night with the ones that you love,
41:09 → 41:11
and we'll see you in about 23 hours.