Perplexity CEO Aravind Srinivas explains that despite rapid advances in artificial intelligence, AI systems still fundamentally depend on humans for guidance, judgment, and meaningful results. While AI models can generate text, summarize documents, and surface information quickly, Srinivas argues that they are not replacements for human thought or expertise. Instead, AI is most effective when paired with human insight, interpretation, and critical decision-making.
A key point Srinivas makes is that AI lacks true understanding and context. Current models process patterns in data and produce outputs that appear coherent, but they do not possess the depth of comprehension that humans apply when assessing complex situations. This limitation means AI can make confident-sounding errors or miss nuanced implications that matter in fields like law, medicine, journalism, and policy. Humans are still needed to verify, refine, and interpret AI outputs.
Srinivas also emphasizes the importance of human values, ethics, and accountability in AI deployment. Since AI systems reflect patterns learned from existing data, they can inadvertently perpetuate biases or produce harmful side effects if not carefully supervised. Human involvement is necessary to set priorities, enforce ethical standards, and ensure that technology serves societal goals rather than amplifying existing problems. This responsibility extends from developers and engineers to end users, who must recognize when AI should be questioned or overridden.
Finally, the Perplexity CEO suggests that AI and humans are complementary rather than competitive. Rather than viewing AI as a threat to jobs, Srinivas sees it as a tool that can augment human capabilities—helping people find information faster, explore complex topics, and automate routine tasks. The real value of AI, he argues, lies not in replacing human expertise but in scaling human potential, enabling people to focus on creativity, judgment, and deep problem-solving while machines handle repetitive or data-intensive work.