Shreya Ishita

Shreya is an Enterprise AI Strategy Principal at Google, focusing on the commercialization and socio-technical integration of applied AI for global enterprises. Drawing on her background in mathematics and physics, she works at the intersection of industry practice and the philosophy of AI to investigate the architectural limits of current machine learning paradigms. Specifically, she explores the distinction between epistemic information processing and ontic sense-making to better understand the boundary conditions for robust causal reasoning in AI systems. By mapping these theoretical constraints, she aims to inform responsible AI governance, workforce policy, and safety frameworks in high-stakes enterprise deployments.
Authored Publications
The current pursuit of robust machine intelligence is largely predicated on a substrate-independent, computational-functionalist view of cognition, in which sufficiently complex computational processing is expected to eventually yield generalized reasoning. This paper explores the ontological distinctions between these computational frameworks and biological cognition, specifically how these differences affect the capacity for semantic understanding. By analyzing phenomena such as the "reversal curse", in which models fail to generalize the symmetry of identity relations (A=B implies B=A), and performance on novel reasoning benchmarks (e.g., ARC-AGI), the paper examines whether current model limitations are transient artifacts of scale or indicative of a distinct architectural category. Integrating Stevan Harnad's "symbol grounding problem" with Evan Thompson's biological model of "intrinsic normativity," I investigate whether robust general intelligence might require sense-making: a process distinct from information processing, whereby an agent's internal states are causally coupled with its environment through survival or system-wide stakes, a coupling that grounds symbols in meaning. Current Large Language Models (LLMs) appear to lack this intrinsic normativity and consequently may operate primarily as epistemic instruments rather than ontic agents. By introducing the concept of "ontic grounding," the paper presents a potential framework for distinguishing the simulation of reasoning from genuine understanding, which could have implications for AI safety and governance.