Transcript: Tesla Autopilot Press Conference Call

This is a transcription of the press conference that Elon Musk gave on Oct. 14, 2015 announcing the release of the v7.0 software. Many thanks to the good folks at Electrek for making the audio and slides available. The audio quality of the recording is a bit challenging, so the transcript below (which was done by me) is far from perfect. Corrections are welcome. – Bruce

[Elon Musk speaking] So we have a kind of an exciting announcement, which is the release of Autopilot. It’s Autopilot version one and we still think of it as sort of a public beta, so we want people to be quite careful at first with the use of Autopilot. What I’m going to take you through is how the system learns over time.

The thing that I think is interesting and unique is that we’re employing a “fleet learning technology”. Essentially, the network of vehicles is going to be constantly learning. As we release the software, and more people enable Autopilot, the information about how to drive is uploaded to the network. So each driver is effectively an expert trainer in how the Autopilot should work.
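
To make the idea concrete, here is a minimal sketch, in Python, of what "each driver acting as an expert trainer" could look like: anonymized observations of how humans drove a road segment are aggregated into per-segment driving priors that every car can query. The class names, fields, and segment IDs are purely illustrative assumptions, not Tesla's actual telemetry format.

```python
# Hypothetical sketch of the "fleet learning" idea: each car logs how its
# human driver handled a stretch of road, and those anonymized observations
# are aggregated so the whole fleet benefits. All names are illustrative.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class DrivingObservation:
    road_segment_id: str   # which stretch of road this applies to
    lane_offset_m: float   # where the human driver placed the car in the lane
    speed_mps: float       # how fast the human drove this segment

class FleetModel:
    """Aggregates anonymized observations into per-segment driving priors."""
    def __init__(self):
        self._sums = defaultdict(lambda: {"offset": 0.0, "speed": 0.0, "n": 0})

    def ingest(self, obs: DrivingObservation) -> None:
        s = self._sums[obs.road_segment_id]
        s["offset"] += obs.lane_offset_m
        s["speed"] += obs.speed_mps
        s["n"] += 1

    def prior(self, road_segment_id: str) -> dict:
        s = self._sums[road_segment_id]
        if s["n"] == 0:
            return {}
        return {"lane_offset_m": s["offset"] / s["n"],
                "speed_mps": s["speed"] / s["n"],
                "samples": s["n"]}

# Every car in the fleet contributes; every car can then query the prior.
model = FleetModel()
model.ingest(DrivingObservation("I-280-S-mile-23", lane_offset_m=0.10, speed_mps=29.0))
model.ingest(DrivingObservation("I-280-S-mile-23", lane_offset_m=0.05, speed_mps=30.5))
print(model.prior("I-280-S-mile-23"))
```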

I’ll say a little bit about how that works. It’s a combination of a variety of systems, and it can only really be done with a connected vehicle. So here’s the thing: every car made by Tesla from late September of last year will have this ability overnight. I think it’s quite unique that we can upload a substantial new capability just through software, overnight. Basically any car that has the sensors, which were put in the car about a year ago, will have this ability. The capability will keep improving over time, both from the standpoint of all the expert drivers doing approximately a million miles a day of travel and training, and also in terms of the software functionality.

This version of Autopilot, for example, does not take into account stoplights, stop signs, or red lights. But a future software update will, and it will get more and more refined over time. Then there’s a feature I’ve been promising for a while, which will come in 7.1, which is to have your car automatically put itself to bed in your garage. So you just tap your phone and the car puts itself in the garage and closes the door, and you can also summon the car back. That’ll be in 7.1. So there will be a lot of cool capabilities that will get better over time, as well as just general refinements.


Slide 1: So you have the car on the road and the question is, how does it figure out what to do? There are four major sensor systems.


Slide 2: We’ve got the ultrasonic sensors, essentially ultrasonic sonar, which tells us where everything is within about 5.2 meters or roughly 16-17 feet of distance. So around the perimeter of the car we know where there are obstacles.


Slide 3: That’s then combined with the forward-facing camera with image recognition. The forward-facing camera is able to determine where the lanes are, where cars are ahead of it, and it’s also able to read signs. It’s been able to read speed signs for a while, for example, but it’s able to read pretty much any sign.


Slide 4: This is combined with the forward radar. The radar is very good at detecting fast-moving large objects. It can actually see through fog, rain, snow, and dust. So the forward radar gives the car superhuman senses. It can see through things that a person cannot.


Slide 5: The final sensor is the GPS with high-precision digital maps. The high-precision digital maps are very important because normal maps have quite low precision; all they need to capture is where a street is. But the actual curvature of the road, how many lanes there are, how you merge from one lane to the next–this is not present in any dataset in the world. We’re creating that dataset at Tesla.


Slide 6: So then these all combine, so we can use camera, radar, ultrasonics, and the GPS with high-precision maps to guide the car on its journey.
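
As a rough illustration of that combination, here is a hedged sketch of how readings from the four sensor systems might be merged into lateral and longitudinal targets. The snapshot fields, the vision/map blend weights, and the fallback order are assumptions made for the example, not the production Autopilot design.

```python
# Purely illustrative sensor-fusion sketch: merge ultrasonics, camera, radar,
# and GPS + high-precision map into steering and following-distance targets.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorSnapshot:
    ultrasonic_clearances_m: dict                   # e.g. {"front": 3.1, "left": 1.2}
    camera_lane_center_offset_m: Optional[float]    # None if lanes not detected
    radar_lead_vehicle_range_m: Optional[float]     # None if no lead vehicle
    map_lane_center_offset_m: float                 # from GPS + high-precision map

def fuse_lateral_target(snap: SensorSnapshot) -> float:
    """Pick a lateral offset target, preferring vision but falling back to the map."""
    if snap.camera_lane_center_offset_m is not None:
        # Blend camera and map estimates when both are available (weights assumed).
        return 0.7 * snap.camera_lane_center_offset_m + 0.3 * snap.map_lane_center_offset_m
    return snap.map_lane_center_offset_m   # navigate on GPS/map alone

def fuse_longitudinal_gap(snap: SensorSnapshot) -> Optional[float]:
    """Radar is the primary source for following distance; ultrasonics guard close range."""
    near = min(snap.ultrasonic_clearances_m.values(), default=None)
    candidates = [d for d in (snap.radar_lead_vehicle_range_m, near) if d is not None]
    return min(candidates) if candidates else None

snap = SensorSnapshot({"front": 4.8, "left": 1.5, "right": 1.4},
                      camera_lane_center_offset_m=0.12,
                      radar_lead_vehicle_range_m=38.0,
                      map_lane_center_offset_m=0.10)
print(fuse_lateral_target(snap), fuse_longitudinal_gap(snap))
```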


Slide 7: Have people had a chance to take a test drive? Here we’re seeing how Autosteer uses different visual cues, different road cues, to decide where to drive. So it will pick the left lane marking or the right lane marking or both, determine whether it should follow vehicles, whether it should do probabilistic path prediction, or whether it should use the navigation database. It’s constantly looking up where it is in the world, and depending on its specific location it will know whether to use the left lane marking, the right lane marking, follow vehicles, use probabilistic path prediction, or go purely on navigation by GPS.

This really depends on where it is in the world. I don’t know if you’ve been down the 280, for example, but if you’re in the rightmost lane on the 280 you’ll see the car effectively rule out the right lane marking, because it knows better than to take a turnoff. It should stay in its lane and not take the turnoff. You’ll also see that at one point southbound on the 280 the rightmost lane takes an abrupt shift to the left. It just steps to the left arbitrarily. The car does not change its position in the lane, because it knows to ignore that sudden step change in the rightmost lane.
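
A hedged sketch of that per-location cue selection, assuming a hypothetical lookup table keyed by road segment: the navigation database overrides the default behavior where it has an entry, otherwise the car falls back from lane markings to following a vehicle to probabilistic path prediction. The enum values and segment IDs are invented for illustration.

```python
# Illustrative cue-selection policy; the real policy lives in Tesla's
# navigation database and is far richer than this lookup table.
from enum import Enum, auto

class SteeringCue(Enum):
    LEFT_LANE_MARKING = auto()
    RIGHT_LANE_MARKING = auto()
    BOTH_LANE_MARKINGS = auto()
    FOLLOW_VEHICLE = auto()
    PROBABILISTIC_PATH = auto()
    GPS_NAVIGATION = auto()

# Hypothetical per-segment policy, keyed by road segment id.
CUE_POLICY = {
    "I-280-S-rightmost-lane": SteeringCue.LEFT_LANE_MARKING,  # ignore right edge near turnoffs
    "I-405-S-near-LAX": SteeringCue.GPS_NAVIGATION,           # visual markings misleading here
}

def choose_cue(segment_id: str, lanes_visible: bool, lead_vehicle: bool) -> SteeringCue:
    # The navigation database overrides everything when it has an entry.
    if segment_id in CUE_POLICY:
        return CUE_POLICY[segment_id]
    if lanes_visible:
        return SteeringCue.BOTH_LANE_MARKINGS
    if lead_vehicle:
        return SteeringCue.FOLLOW_VEHICLE
    return SteeringCue.PROBABILISTIC_PATH

print(choose_cue("I-280-S-rightmost-lane", lanes_visible=True, lead_vehicle=True))
print(choose_cue("unknown-segment", lanes_visible=False, lead_vehicle=True))
```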


Slide 8: This is an example of one of the hardest problems we had to solve. Where does it get super hard? This is the 405 South just before you get to LAX. You can see how hard it is. This is actually from quite a good trip we took this morning. Even as a person it’s quite hard to say where to be. We really need better lane markings in California. This is crazy. If you were in Germany or Japan or China this would be great, it would be easy. You can actually see clear lane markings [there]. But instead you have that… I mean, this is the ??? of the lane, little blips, I don’t know why, marking out the lane.

What becomes really problematic in a situation like this is that where the concrete berm is and where the old lane markings used to be are diverging from where the current lane markings are. The problem we encountered, which is quite vexing to solve, was that the vision system could not figure out which is the actual real lane.

Normally you can exclude strange pigments on the road, like skid marks, because they’re not where the lane is. But in this case you have the true lane position and the sort of fake lane position, and they’re diverging. So the camera system would follow the diverging markings and go into the wrong lane. In order to solve this, at this point the car knows that it actually needs to go on navigation GPS. So its lateral position, its lane position, will be guided by the GPS, and it will ignore the visual cues.
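
A small illustrative sketch of that fallback, under the assumption that it can be modeled as a simple divergence check: when the camera’s lane estimate drifts too far from the fleet-mapped lane position, lateral control switches from vision to the GPS/high-precision map. The threshold value and names are assumptions, not Tesla’s actual parameters.

```python
# Illustrative vision-vs-map arbitration for lateral control.
from typing import Optional, Tuple

DIVERGENCE_THRESHOLD_M = 0.75  # hypothetical tolerance before distrusting vision

def lateral_reference(camera_offset_m: Optional[float], map_offset_m: float) -> Tuple[str, float]:
    """Return which source is steering and the lateral offset to track."""
    if camera_offset_m is None:
        return ("gps_map", map_offset_m)
    if abs(camera_offset_m - map_offset_m) > DIVERGENCE_THRESHOLD_M:
        # Old markings / skid marks are pulling vision away from the true lane.
        return ("gps_map", map_offset_m)
    return ("vision", camera_offset_m)

# Vision agrees with the map: trust vision.
print(lateral_reference(0.10, 0.05))   # ('vision', 0.1)
# Vision has latched onto diverging old markings: fall back to the map.
print(lateral_reference(1.40, 0.05))   # ('gps_map', 0.05)
```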

A sort of funny anecdote that we had was, when we did this we realized we’ve got to have times when the system will automatically revert to navigating on GPS when the visual cues are actually misleading. So we had one of our drivers actually drive this exact section to precisely map out the lane, so we knew for sure where the lanes would be. Then we implemented the system, and yet once again the car would change lanes when it shouldn’t change lanes. This is because the human driver, who was a trained driver, actually made the lane change wrong. It was quite perplexing for a while: why is it going on GPS and making the wrong move? It was because the reference driver actually made the wrong move. We corrected that, and now you can actually do this in any of the lanes and it will hold position correctly. It will actually do better than a person.


Slide 9: This is to give you a sense of the level of precision that the Tesla fleet is obtaining in terms of figuring out where roads are and where parking lots are. This is all just in a statistical database. There’s no user attribution: we don’t know who it was or when it was, we just know that this is where a road exists, this is where cars have gone, statistically speaking. You can see that the Tesla user fleet has basically mapped out the entire area in this map, all the way down to the parking lot. You can actually see where in the parking lot people were, and what constitutes a real parking spot versus not.
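
Here is a minimal sketch of what such an anonymized statistical database might look like: GPS fixes from the fleet are snapped to grid cells and only traversal counts are kept, so nothing identifies who drove where or when. The cell size and the "driven" threshold are made-up values for the example.

```python
# Illustrative anonymized fleet occupancy map: only counts per grid cell.
from collections import Counter

CELL_SIZE_DEG = 0.0001   # roughly 10 m at mid latitudes (assumed cell size)

def cell(lat: float, lon: float) -> tuple:
    return (round(lat / CELL_SIZE_DEG), round(lon / CELL_SIZE_DEG))

class StatisticalRoadMap:
    def __init__(self):
        self.counts = Counter()   # cell -> number of fleet traversals

    def record_fix(self, lat: float, lon: float) -> None:
        self.counts[cell(lat, lon)] += 1   # no vehicle or user identity stored

    def is_driven(self, lat: float, lon: float, min_traversals: int = 5) -> bool:
        return self.counts[cell(lat, lon)] >= min_traversals

road_map = StatisticalRoadMap()
for _ in range(10):                       # ten anonymous passes through the same spot
    road_map.record_fix(37.3948, -122.1500)
print(road_map.is_driven(37.3948, -122.1500))   # True: statistically a real road or parking aisle
print(road_map.is_driven(37.4000, -122.1600))   # False: no fleet traffic observed here
```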


Slide 10: This is a normal navigation map, which is fine for general directions but not great for figuring out where the car can actually go. Whereas this is a high-precision map, and you can see that each lane is mapped out and you know exactly what the transitions are. You know that, for example, here you don’t make an abrupt 90-degree right ???, you actually make a curve. And you can see places like here: if you were to follow the GPS you would likely put the curve [here], when actually what you want to do is a curve like that.
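
As a hedged illustration of the difference, here is a sketch contrasting a coarse navigation-map street record with lane-level high-precision records whose dense centerlines let a turn come out as a curve rather than an abrupt corner. All field names and coordinates are hypothetical.

```python
# Illustrative contrast between a coarse nav-map record and lane-level records.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class NavMapStreet:
    name: str
    centerline: List[Tuple[float, float]]   # coarse points, no lane detail

@dataclass
class HighPrecisionLane:
    lane_id: str
    centerline: List[Tuple[float, float]]   # dense points tracing the lane
    successor_lane_ids: List[str] = field(default_factory=list)  # how lanes connect

def turn_waypoints(lane_a: HighPrecisionLane, lane_b: HighPrecisionLane) -> List[Tuple[float, float]]:
    """Stitch two connected lanes so the car follows a smooth curve through the turn."""
    return lane_a.centerline + lane_b.centerline

# Coarse nav map: just the street up to the intersection corner.
street = NavMapStreet("Page Mill Rd", [(0.0, 0.0), (100.0, 0.0)])
print(street.name, len(street.centerline), "coarse points")

# High-precision lanes: dense points through the actual curved transition.
through = HighPrecisionLane("pm_east_1", [(0.0, 0.0), (40.0, 0.0), (80.0, 0.0)], ["turn_1"])
turn = HighPrecisionLane("turn_1", [(85.0, 1.0), (92.0, 5.0), (96.0, 12.0), (98.0, 20.0)])
print(turn_waypoints(through, turn))
```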

So that’s the basic presentation. I think this is going to be quite a profound experience for people when they do it. We’ve been testing it for over a year, so we got quite used to it. But I’ve noticed that when I put friends of mine in the car and they see the car drive, they’re blown away. It’s really quite an interesting new experience. I think it’s going to change people’s perception of the vehicle, quite rightly.

[Q&A follows, but that is not transcribed here]
