Andrej Karpathy, the deep learning and computer vision specialist whom Tesla appointed as its Director of AI five years ago and who led the Autopilot vision team, has announced that he is officially leaving Tesla.
Karpathy had been on a four-month absence that sparked speculation about whether he would return.
In a tweet published on Wednesday afternoon, Karpathy said: “It’s been a satisfaction to support Tesla in its pursuit of its goals in the past five years, and it was a tough decision to end the partnership. In that time, Autopilot graduated from lane keeping to city streets, and I’m looking forward to seeing the exceptionally solid Autopilot team carry on that success.”
Karpathy admitted that he doesn’t have specific plans for what he will do next, saying he intends to spend more of his time “revisiting my long-term passions around technical work in AI, open source and education.”
Sources previously told TechCrunch that Karpathy is considering venture investing.
Karpathy’s announcement comes just as Tesla disclosed in a California regulatory filing that it is cutting 229 data annotation workers, part of Tesla’s larger Autopilot team, and closing the San Mateo, California office where they worked.
Before joining Tesla in 2017, Karpathy was a researcher at OpenAI, an artificial intelligence non-profit co-founded by Elon Musk. Karpathy has deep experience in AI and created one of the most acclaimed deep learning courses offered at Stanford University.
His position at Tesla focused on the computer vision software behind Autopilot, the company’s sophisticated driver assistance system, work that traces back to his research. In his dissertation, Karpathy focused on developing a system in which neural networks could recognize distinct objects in an image and label them with natural language descriptions. It also included an algorithm that works in reverse: a model that could take a natural language description (e.g., “black dress”) and locate the corresponding object within an image.