Toward Ubiquitous Intelligence: The Future of Information Access Across Digital and Physical Realms
Description
As large language models (LLMs) become the primary gateway to information, we are witnessing a fundamental shift in how people search, learn, and engage with digital content. Increasingly, information is accessed not through websites or traditional search engines, but through conversational agents and emerging AR/XR platforms that promise to blend digital content with the physical world. This transition raises a critical challenge: how can AI systems provide reliable, personalized, and context-aware experiences when the objects around us are often difficult for models to identify or reason about, especially when those objects fall outside an AI system's training data?
I will illustrate this challenge through a sequence of projects that reflect my research trajectory. I will begin with my applied work at Adobe, where we develop tools to help businesses and creators adapt content for LLM-based discovery. I will then turn to Augmented Object Intelligence, an end-to-end XR pipeline that demonstrates how AI can enable real-world object interaction ("XR-Objects"), while also revealing the limitations of vision-only systems, which can hallucinate object identities. To address this, I introduce ubiquitous metadata, a research agenda focused on embedding unobtrusive, machine-readable markers directly into physical objects. Through systems I worked on during my PhD at MIT CSAIL, including SensiCut, StructCode, G-ID, InfraredTags, BrightMarker, and Imprinto, I will show how new fabrication and sensing techniques can provide reliable object identification that bridges digital and physical contexts. Together, these projects outline a path toward a future in which ubiquitous AI assistants seamlessly understand both our queries and the physical objects we interact with, with the potential to transform information access across digital and physical realms.