How Intelligent Can Computers Become?
Theory, Algorithm, and Application of Machine Learning
With the rapid advancement of information and communication technology, intellectual activities such as reasoning and creativity, once thought to be exclusive to humans, are now being realized by computers. At the Sugiyama Laboratory, we conduct research on intelligent information processing technologies in the field of artificial intelligence known as machine learning, under the theme of "How intelligent can computers become?"
- Development of Learning Theories
Generalization, the ability to handle unseen situations that were not encountered during learning, is essential for computers to behave intelligently. In our laboratory, we mathematically explore the mechanisms by which computers can acquire generalization ability, based on probability theory and statistics.
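As a textbook illustration of the kind of guarantee such theories provide (a standard uniform-convergence bound for a finite hypothesis class, not a result specific to our laboratory): given n i.i.d. training samples and a loss bounded in [0, 1], with probability at least 1 - δ,

```latex
% Standard Hoeffding + union-bound guarantee for a finite hypothesis class H;
% a textbook illustration, not a lab-specific result.
R(h) \le \widehat{R}(h) + \sqrt{\frac{\log|\mathcal{H}| + \log(1/\delta)}{2n}}
\qquad \text{for all } h \in \mathcal{H},
```

where R(h) is the expected risk and \widehat{R}(h) the empirical risk on the training sample; the excess term shrinks as the sample size n grows, which is one way to formalize how generalization ability is acquired.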
- Development of Learning Algorithms
Machine learning encompasses a variety of problems, including supervised learning, in which computers learn to predict outputs from input-output paired data; unsupervised learning, in which they learn from input-only data; and reinforcement learning, in which they optimize long-term decision-making through interaction with the environment. Our laboratory develops highly practical machine learning algorithms with a solid theoretical foundation.
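A minimal sketch of the first setting, assuming scikit-learn and its toy digits dataset (all choices here are illustrative only): fit a model on input-output pairs, then check generalization on held-out inputs.

```python
# A sketch of supervised learning: learn from input-output pairs and
# evaluate on unseen inputs. Dataset and model are illustrative choices.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)  # learn from pairs
print("held-out accuracy:", model.score(X_te, y_te))       # unseen inputs
```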
- Real-World Applications of Machine Learning Technology
With the development and widespread adoption of artificial intelligence technologies, massive amounts of real-world data (text, speech, images, videos, behavior, and economic data), as well as experimental observation data in fields such as physics, chemistry, biology, medicine, astronomy, and robotics, are being collected. Our laboratory collaborates with domestic and international companies and research institutes to tackle real-world problems using cutting-edge machine learning algorithms.
Visual Intelligence for the Real World — Beyond Human Perception
Yokoya Laboratory conducts research on visual information processing to understand complex real-world scenes. Positioned at the intersection of computer vision and remote sensing, our work aims at computational imaging and scene understanding that go beyond human perception across multiple scales and modalities.
- Computational Imaging and Inverse Problems
By integrating sensing and computation, we aim to overcome hardware limitations such as resolution, noise, and occlusion. Our research involves the development of mathematical models and algorithms based on optimization, signal processing, and machine learning to recover original scenes or signals from incomplete and noisy observations. We tackle challenging inverse problems in high-dimensional and multimodal image reconstruction and enhancement.
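As a small self-contained illustration of this style of inverse problem, here is a generic sparse-recovery setup solved with iterative soft-thresholding (ISTA); all sizes, noise levels, and parameters are made up, and this is not one of the lab's specific methods.

```python
# Recover a sparse signal x from noisy, incomplete linear measurements
# y = A x + noise by minimizing ||A x - y||^2 / 2 + lam * ||x||_1 with ISTA.
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 80                            # signal length, measurements (m < n)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz const. of the gradient
x = np.zeros(n)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))    # gradient step on the data-fit term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```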
- Scene Understanding and Multimodal Fusion
To understand complex real-world environments, we work on semantic and geometric reconstruction, digital twin generation, and interactive scene interpretation. This involves fusing optical, thermal, radar, and LiDAR data from multi-platform sensors, including satellites, drones, and ground systems. We also explore vision-language models for generalizable and interactive scene understanding, and address limited training data through self-supervised learning, synthetic data, and simulation-based approaches.
- Remote Sensing for Societal Impact
Remote sensing enables large-scale observation, from inaccessible natural environments to densely built urban areas. We develop algorithms for mapping and monitoring geospatial features such as urban structures, forest biomass, agricultural patterns, and disaster damage. Through the OpenEarthMap project, we aim to achieve geographic fairness in mapping performance by building open datasets and developing resource-efficient algorithms.
Machine Evaluation and Learning
Artificial intelligence (AI) systems now improve so quickly that the traditional evaluation pipeline (build a benchmark dataset, beat it with a better model, repeat) struggles to keep up. As tasks grow broader, more difficult, and more open-ended, the benchmark datasets required to test them demand months or years of expert effort. Furthermore, models are approaching, or already surpassing, the point at which humans can no longer supply high-quality guidance or supervision. Our lab works at this shifting frontier, developing tools that let the AI research community measure progress reliably and teach powerful models with minimal and imperfect human supervision.
- AI Evaluation
We create data-centric and meta-benchmarking techniques that can show when a benchmark is saturated, guide its redesign, and quantify the performance upper bound and performance gap of an AI system. Our goal is to build a foundation of evaluation tools that continues to signal progress in AI, even as models potentially become superhuman.
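As a toy sketch of one such saturation signal (synthetic outcomes and illustrative numbers, not our actual metrics): the share of test items that every model in a comparison pool already answers correctly, alongside the remaining headroom above the best model.

```python
# Toy saturation check for a benchmark: when almost all items are solved by
# every model, the benchmark can no longer separate stronger models.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_items = 5, 1000
correct = rng.random((n_models, n_items)) < 0.92  # per-model, per-item outcomes

solved_by_all = correct.all(axis=0).mean()        # items with no signal left
best_score = correct.mean(axis=1).max()           # current leaderboard ceiling
headroom = 1.0 - best_score                       # remaining measurable gap
print(f"solved by all models: {solved_by_all:.1%}, headroom: {headroom:.1%}")
```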
- Scalable Supervision
Anticipating a future in which AI models outperform human experts across a wide range of tasks, we design algorithms that learn from imperfect human signals and other scalable forms of oversight. The goal is to keep stronger models aligned with human intent even when high-quality human supervision is no longer practical.
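As a minimal sketch of one well-known idea in this space, learning from noisy labels via forward loss correction with an assumed label-noise transition matrix; the matrix, model outputs, and data below are illustrative, not our lab's specific algorithm.

```python
# Forward loss correction: if T[i, j] = P(observed label j | true label i),
# score the model's clean-label predictions against noisy labels via probs @ T.
import numpy as np

def forward_corrected_nll(probs, noisy_labels, T):
    """Negative log-likelihood of noisy labels under noisy predictions probs @ T."""
    noisy_probs = probs @ T                  # predicted noisy-label distribution
    picked = noisy_probs[np.arange(len(noisy_labels)), noisy_labels]
    return -np.mean(np.log(picked + 1e-12))

T = np.array([[0.8, 0.2],                    # assumed (hypothetical) noise rates
              [0.3, 0.7]])
probs = np.array([[0.9, 0.1],                # model's clean-label predictions
                  [0.2, 0.8]])
print(forward_corrected_nll(probs, np.array([0, 1]), T))
```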