

Founded less than a year back by OpenAI’s former VP of research Dario Amodei, along with his sister Daniela and nine other OpenAI employees, the young company picked up USD 124 million in funding at the time. Not even a year later, it raised another USD 580 million. The Series B round was led by Sam Bankman-Fried, CEO of FTX Trading, and included participation from Skype co-founder Jaan Tallinn, Infotech’s James McClave and former Google CEO Eric Schmidt. What’s even more interesting is that Anthropic’s list of supporters didn’t include the usual suspects among deep tech investors.

[Image: Co-founder and CEO of Anthropic Dario Amodei]

Explainability in commercial AI and academic research

Until a couple of years ago, explainable AI witnessed its time in the spotlight. There was a wave of core AI startups, like Kyndi, Fiddler Labs and DataRobot, that integrated explainable AI into their products. Explainable AI also started gaining traction among VCs, with firms like UL Ventures, Intel Capital, Lightspeed and Greylock actively investing in it. A report by Gartner stated that “30% of government and large enterprise contracts will require XAI solutions by 2025”. However, most of the growth in explainable AI was expected to arise in industries like banking, healthcare and manufacturing: areas that placed a high value on trust and transparency and demanded accountability from AI models. VCs were more keen to put their money on such comparatively tedious applications, focused on transforming an existing industry, than on a distant moonshot. Startups like Anthropic were started with a very different intention.
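To make the idea of an “XAI solution” concrete, here is a minimal sketch of one of the simplest post-hoc explainability techniques, permutation feature importance: score each input feature by how much a model’s error grows when that feature’s values are randomly shuffled. The dataset, weights and function names below are invented for illustration; none of this code is taken from the companies or reports mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Stand-in for a trained "black box": a fixed linear predictor.
weights = np.array([3.0, 0.5, 0.0])

def predict(data):
    return data @ weights

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))

# Permutation importance: shuffle one feature at a time and measure how
# much the prediction error grows. Features the model genuinely relies
# on cause a large increase; irrelevant ones cause almost none.
for j in range(X.shape[1]):
    X_shuffled = X.copy()
    X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])
    increase = mse(y, predict(X_shuffled)) - baseline
    print(f"feature {j}: error increase = {increase:.3f}")
```

Commercial XAI tooling layers far more on top of this, from gradient-based attributions to counterfactual explanations, but the underlying question is the same: which inputs actually drive the model’s output?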

Need for AI Interpretability

[Image: The concept of Explainable AI as demonstrated in a DARPA report]

Researcher and author Gary Marcus has often pointed out that contemporary AI’s dependence on deep learning is flawed because of the gap between recognising patterns and understanding them. While machines can now recognise patterns in data, that understanding is largely superficial rather than conceptual, which makes their results difficult to explain. Marcus has said that this has created a vicious cycle in which companies are trapped into chasing benchmarks instead of pursuing the foundational ideas of intelligence.

Thomas Wolf, co-founder of Hugging Face, articulated these fears in a post on LinkedIn. Wolf noted that enthusiasts like him, who saw AI as a way to unlock deeper insights into human intelligence, now seemed to believe that even as we inch closer towards intelligence, the concept of what it actually is still eludes us. “Understanding how these new AI/ML models work at low level is key to this part of the scientific journey of AI and calls for more research on interpretability and diving in the inner-working of these new models. Pretty much only Anthropic seems to be really working on this type of research at the moment but I expect this type of research direction to be increasingly important as compute and large models become more and more widely available,” he stated.

This search for clarity pushed a lot of interest into interpretability, and the money followed.
