No recommendation or advice is being given as to whether any investment is suitable for a particular investor or a group of investors; this article is for informational purposes only. It should not be assumed that any investments in securities, companies, sectors or markets identified and described were or will be profitable. No recommendations to buy, sell or hold are being made; rather, we intend to express what opportunities are in the market at any point in time. Full disclaimer at the end of the article.
While most of the automotive industry has adopted a "belt-and-suspenders" approach by combining cameras, radar, and Lidar, Tesla has made a bold and controversial bet on a "vision-only" system. CEO Elon Musk's core philosophy is that if humans can navigate the complexities of driving with just two eyes (our biological cameras), a sufficiently advanced AI should be able to achieve superhuman performance with a sophisticated camera system and a powerful neural network.
1. The Hardware: The Eyes and the Brain
At the heart of Tesla's FSD is a synergy between its sensory hardware (the "eyes") and its custom-built processing powerhouse (the "brain").
The "Eyes" - A Suite of Eight Cameras: Tesla vehicles are equipped with eight external cameras that provide a 360-degree view of the environment. These cameras have varying focal lengths and purposes:
Forward-Facing Cameras: A trio of cameras (narrow, main, and wide-angle) located in the windshield housing provides detailed, overlapping views of the road ahead, detecting traffic lights, signs, and distant objects at up to 250 meters.
Side and Rear Cameras: Cameras in the B-pillars and front fenders provide crucial information about adjacent lanes, blind spots, and crossing traffic. A rear-facing camera is essential for parking and detecting vehicles approaching from behind.
Tesla Vision: In a pivotal move, Tesla began removing forward-facing radar from its new vehicles in 2021, and later, ultrasonic sensors. This "Tesla Vision" strategy forces the system to rely entirely on camera data for everything from cruise control to collision avoidance, a decision that underscores its commitment to the vision-only path.
The "Brain" - The FSD Computer: Processing this torrent of visual data in real time requires immense computational power.
Custom Tesla FSD Chip: Dissatisfied with off-the-shelf solutions, Tesla designed its own FSD computer chip. The current generation in wide circulation, Hardware 3 (HW3), is a powerful System-on-a-Chip (SoC) featuring two Neural Network Accelerators. It's capable of processing 2,300 frames per second.
Hardware 4 (HW4) and Beyond: Newer vehicles are being equipped with HW4, which boasts significantly more processing power and supports higher resolution 5-megapixel cameras (up from 1.2 megapixels in HW3). This upgrade is crucial for improving object detection and preparing for more advanced autonomous capabilities. The eventual goal is to have hardware that can support Level 4/5 autonomy.
2. The Software: From Code to End-to-End AI
The true magic of FSD lies in its software, which has undergone a radical transformation.
The Old Way - Heuristics and C++: Early versions of Autopilot and FSD relied on over 300,000 lines of explicit C++ code. Engineers would manually write rules for countless driving scenarios: "If you see a red octagon, stop," or "If a car merges, adjust speed." This approach is brittle and cannot possibly account for the infinite "edge cases" encountered in real-world driving.
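The brittleness of the rule-based approach can be sketched in a few lines. This is a purely illustrative toy, not Tesla's actual code: every scenario needs its own hand-written branch, and anything not enumerated falls through to a default.

```python
# Hypothetical sketch of a hand-written heuristic controller.
# All names and values are illustrative, not Tesla's actual code.

def heuristic_controller(scene: dict) -> dict:
    """Map a perceived scene to driving commands using explicit rules."""
    if scene.get("red_octagon_ahead"):          # explicit stop-sign rule
        return {"throttle": 0.0, "brake": 1.0}
    if scene.get("car_merging"):                # explicit merge rule
        return {"throttle": 0.2, "brake": 0.0}
    # Every new scenario needs another branch; any edge case not
    # enumerated here silently falls through to this default.
    return {"throttle": 0.5, "brake": 0.0}

print(heuristic_controller({"red_octagon_ahead": True}))
# prints {'throttle': 0.0, 'brake': 1.0}
```

The point of the sketch is structural: no finite list of `if` branches can cover the open-ended variety of real-world driving, which is what motivated the shift described next.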
The New Way - End-to-End Neural Networks (FSD v12): The latest versions of FSD represent a paradigm shift. Tesla is moving to an "end-to-end" AI model. Instead of hard-coding rules, the system learns to drive by watching video clips from Tesla's massive fleet.
How it Works: The neural network takes in raw camera footage ("photons in") and outputs direct driving controls—steering, acceleration, and braking ("controls out"). It learns the complex relationship between what it "sees" and the appropriate driving action by being trained on millions of video clips of exemplary human driving.
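The "photons in, controls out" idea can be illustrated with a toy learned function: a single model maps raw pixel values directly to control outputs, with no hand-written rules in between. The network, shapes, and weights below are invented for illustration; the production system is vastly larger and trained on fleet video rather than random numbers.

```python
import numpy as np

# Toy end-to-end policy: flattened camera pixels in, controls out.
# Weights are random stand-ins for what training would actually learn.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 64)) * 0.1   # input -> hidden (illustrative sizes)
W2 = rng.standard_normal((64, 3)) * 0.1    # hidden -> [steer, accel, brake]

def end_to_end_policy(frame: np.ndarray) -> np.ndarray:
    """frame: flattened pixel values -> [steering, acceleration, braking]."""
    hidden = np.tanh(frame @ W1)           # learned intermediate representation
    return np.tanh(hidden @ W2)            # controls out, each in (-1, 1)

frame = rng.standard_normal(16)            # stand-in for one camera frame
controls = end_to_end_policy(frame)
print(controls.shape)  # (3,)
```

Training adjusts the weight matrices so that, across millions of clips, the outputs match what an exemplary human driver did in the same situation.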
Fleet Learning: This is Tesla's most significant competitive advantage. With millions of vehicles on the road, Tesla has access to an unparalleled dataset of real-world driving scenarios. Every time a driver makes a correction or the system encounters a new situation, that data can be used to further train and improve the neural network. This creates a powerful data feedback loop.
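The feedback loop described above can be sketched as a simple rule: when the driver's action disagrees with the system's, the clip is queued as a new training example labeled with what the human did. The event names and structure here are hypothetical, purely to make the loop concrete.

```python
# Illustrative sketch of a fleet-learning data loop (names are hypothetical).
training_queue = []

def on_drive_event(clip_id: str, system_action: str, driver_action: str) -> None:
    """Queue a clip for retraining whenever the driver overrides the system."""
    if driver_action != system_action:                 # correction detected
        training_queue.append((clip_id, driver_action))  # label = human action

on_drive_event("clip_001", "slow_down", "slow_down")   # agreement: nothing queued
on_drive_event("clip_002", "proceed", "stop")          # correction: queued
print(len(training_queue))  # 1
```

Each pass through this loop enlarges the training set with exactly the situations the current model handled worst, which is why fleet size compounds into a data advantage.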
Simulation and Data Labeling: To supplement real-world data, Tesla runs billions of miles in simulation to test the AI against rare and dangerous scenarios. Furthermore, they are developing sophisticated automated systems to label the vast amounts of video data required for training, a process that has traditionally been a costly and time-consuming human endeavor.
3. Current Capabilities and Limitations
It is crucial to understand that, despite its name, Full Self-Driving is currently a Level 2 driver-assistance system. The driver must remain fully attentive and prepared to take over at any moment.
What it Can Do (FSD Beta v12):
Navigate on Autopilot: Guide the car from a highway on-ramp to off-ramp, including making lane changes and taking interchanges.
Autosteer on City Streets: Navigate complex urban environments, stopping for traffic lights and stop signs, making turns, and navigating around obstacles.
"Human-like" Driving: With the new v12 architecture, users report a much smoother and more natural driving style, as the AI mimics the nuances of human driving rather than following rigid, robotic rules. It can, for example, adjust its speed based on the flow of traffic rather than just the posted speed limit.
Where it Struggles:
Unprotected Left Turns: These remain one of the most challenging maneuvers, requiring the system to judge the speed of oncoming traffic from multiple directions.
Unusual Scenarios & "Edge Cases": The system can still be confused by unusual road closures, complex construction zones, or the unpredictable behavior of pedestrians and cyclists.
Weather and Lighting: While improving, heavy rain, snow, fog, and direct sun glare can still degrade camera performance, posing a significant challenge for a vision-only system.
"Phantom Braking": Though reduced, there are still instances of the car braking unexpectedly for perceived but non-existent hazards.
4. The Financial Angle: High Risk, High Reward
For investors, the implications of Tesla's FSD strategy are profound:
The Scalability Bet: If Tesla solves autonomy with cameras and a powerful computer, deploying it across millions of existing vehicles carries almost no incremental hardware cost, so each activation flows through at software-like margins. This represents a massive potential revenue stream through one-time purchases and monthly subscriptions, both of which are already in place.
The Robotaxi Vision: The ultimate goal is a Tesla-owned autonomous ride-hailing network. A successful "Robotaxi" fleet would transform Tesla from a car manufacturer into a high-margin transportation-as-a-service provider, potentially justifying its high valuation. Recent reports in June 2025 of a fully autonomous delivery of a Model Y from the factory to a customer's home, with no one in the vehicle, signal a major step toward this reality.
The "Black Box" Risk: A key challenge with end-to-end AI is its "black box" nature. It can be difficult to determine exactly why the AI made a specific decision. This presents significant regulatory and liability hurdles. A crash involving an FSD vehicle raises complex questions about accountability that the industry and insurers are still grappling with.
The Competitive Moat: The "data advantage" from its fleet is a powerful moat. Competitors using Lidar may have more precise sensing in some areas, but they lack the sheer volume of real-world driving data that Tesla uses to train its AI, which Tesla argues is the key to solving general-purpose self-driving.
In essence, Tesla's FSD is not just a feature; it is a fundamental technological and financial gamble. Its success hinges on the belief that vision and AI can not only match but exceed the safety and reliability of systems that rely on the perceived safety net of Lidar. The outcome of this high-stakes debate will undoubtedly define the next decade of the automotive industry.
Although we obtain information contained in our newsletter from sources we believe to be reliable, we cannot guarantee its accuracy as they are public sources. The opinions expressed here in the Focus on Risk Silicon Valley newsletter are those of ourselves, our editors, and our contributors, and may change without notice at any time. Any views or opinions expressed here do not reflect those of the organization as a whole. The information in our newsletter may become outdated over time, and we have no responsibility or obligation to update it. Additionally, the information in our newsletter is not intended to represent individual investment advice, and nothing herein constitutes investment, legal, accounting or tax advice. No recommendations to buy, sell or hold are being made; rather, we intend to express what opportunities are in the market at any point in time, for informational purposes only.
No recommendation or advice is being given as to whether any investment is suitable for a particular investor or a group of investors. It should not be assumed that any investments in securities, companies, sectors or markets identified and described were or will be profitable. We strongly advise you to discuss your investment options with your financial adviser prior to making any investments, including whether any investment is suitable for your specific needs.
The information provided in our newsletter is private, privileged, and confidential information, licensed for your sole individual use as a subscriber. Focus on Risk reserves all rights to the content of this newsletter. Forwarding, copying, disseminating, or distributing this newsletter in whole or in part, including substantial quotation of any portion of the publication or any release of specific investment recommendations, is strictly prohibited.