Lab introduction slides
How Intelligent Can Computers Become?
Theory, Algorithm, and Application of Machine Learning
With the rapid advancement of information and communication technology, intellectual activities once thought to be exclusive to humans, such as reasoning and creativity, are now being realized by computers. At the Sugiyama Laboratory, under the theme of "How intelligent can computers become?", we conduct research on intelligent information processing technologies in the field of artificial intelligence known as machine learning.
- Development of Learning Theories
Generalization refers to the ability to handle unknown situations that have not been encountered during learning, which is essential for computers to behave intelligently. In our laboratory, we mathematically explore the mechanisms by which computers can acquire generalization abilities, based on probability theory and statistics.
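As a toy illustration of what generalization theory quantifies (a hypothetical sketch, not the laboratory's actual analysis), the gap between error on the training data and error on unseen data can be observed by fitting models of different complexity to a small sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Draw (x, y) pairs from a noisy sine curve."""
    x = rng.uniform(0.0, 3.0, n)
    y = np.sin(x) + rng.normal(0.0, 0.3, n)
    return x, y

x_tr, y_tr = sample(15)      # small training set
x_te, y_te = sample(5000)    # large held-out set approximates the true risk

def mse(coef, x, y):
    return float(np.mean((np.polyval(coef, x) - y) ** 2))

results = {}
for deg in (1, 9):
    coef = np.polyfit(x_tr, y_tr, deg)               # least-squares polynomial fit
    results[deg] = (mse(coef, x_tr, y_tr), mse(coef, x_te, y_te))
    print(f"degree {deg}: train MSE {results[deg][0]:.3f}, "
          f"test MSE {results[deg][1]:.3f}")
```

The flexible degree-9 polynomial fits the 15 training points more closely than the straight line, yet its held-out error is typically much larger than its training error; characterizing when and why this gap stays small is exactly the question of generalization.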
- Development of Learning Algorithms
Machine learning encompasses a variety of problems, including supervised learning, where computers learn from paired input-output data; unsupervised learning, where they learn from input-only data; and reinforcement learning, where long-term decision-making is optimized through interaction with the environment. Our laboratory develops highly practical machine learning algorithms with a solid theoretical foundation.
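The three settings differ mainly in what the data provide. A minimal sketch of each on hypothetical toy data (illustrative only, not the laboratory's code):

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Supervised learning: paired (input, output) data.
X0 = rng.normal(-2.0, 1.0, (100, 2))           # inputs labeled class 0
X1 = rng.normal(+2.0, 1.0, (100, 2))           # inputs labeled class 1
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)    # learned from the labels
X_test = rng.normal(+2.0, 1.0, (50, 2))        # all actually from class 1
pred = (np.linalg.norm(X_test - mu1, axis=1)
        < np.linalg.norm(X_test - mu0, axis=1))
supervised_acc = pred.mean()                   # nearest-mean classifier

# --- Unsupervised learning: input-only data; structure found by clustering.
X = np.vstack([X0, X1])                        # labels discarded
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(20):                            # Lloyd's k-means iterations
    d = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
    assign = d.argmin(axis=1)
    centers = np.array([X[assign == k].mean(axis=0) for k in (0, 1)])

# --- Reinforcement learning: sequential decisions with reward feedback only.
p_true = np.array([0.8, 0.2])                  # two-armed Bernoulli bandit
q = np.zeros(2)                                # estimated value of each arm
n = np.zeros(2)
for t in range(2000):
    a = rng.integers(2) if rng.random() < 0.1 else int(q.argmax())
    r = float(rng.random() < p_true[a])        # reward from the environment
    n[a] += 1
    q[a] += (r - q[a]) / n[a]                  # incremental value estimate
```

The supervised learner uses the labels directly, the clustering step recovers the two groups without any labels, and the bandit agent improves its decisions purely from reward feedback.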
- Real-World Applications of Machine Learning Technology
With the development and widespread adoption of artificial intelligence technologies, massive amounts of real-world data, such as text, speech, images, videos, behavior, and economic data, as well as experimental observation data in fields such as physics, chemistry, biology, medicine, astronomy, and robotics, are being collected. Our laboratory collaborates with domestic and international companies and research institutes to tackle real-world problems using cutting-edge machine learning algorithms.
Visual Intelligence for the Real World — Beyond Human Perception
Yokoya Laboratory conducts research on visual information processing to understand complex real-world scenes. Positioned at the intersection of computer vision and remote sensing, our work aims at computational imaging and scene understanding that go beyond human perception across multiple scales and modalities.
- Computational Imaging and Inverse Problems
By integrating sensing and computation, we aim to overcome hardware limitations such as resolution, noise, and occlusion. Our research involves the development of mathematical models and algorithms based on optimization, signal processing, and machine learning to recover original scenes or signals from incomplete and noisy observations. We tackle challenging inverse problems in high-dimensional and multimodal image reconstruction and enhancement.
- Scene Understanding and Multimodal Fusion
To understand complex real-world environments, we work on semantic and geometric reconstruction, digital twin generation, and interactive scene interpretation. This involves fusing optical, thermal, radar, and LiDAR data from multi-platform sensors, including satellites, drones, and ground systems. We also explore vision-language models for generalizable and interactive scene understanding, and address limited training data through self-supervised learning, synthetic data, and simulation-based approaches.
- Remote Sensing for Societal Impact
Remote sensing enables large-scale observation, from inaccessible natural environments to densely built urban areas. We develop algorithms for mapping and monitoring geospatial features such as urban structures, forest biomass, agricultural patterns, and disaster damage. Through the OpenEarthMap project, we aim to achieve geographic fairness in mapping performance by building open datasets and developing resource-efficient algorithms.
Towards Practical and Reliable Machine Learning
Ishida Laboratory started in 2021 and is currently conducting fundamental research to develop machine learning algorithms. For example, we are building algorithms to learn from weak supervision and regularizers that can alleviate overfitting. We aim to make machine learning more practical and reliable through our research.
- Weakly Supervised Learning
To perform ordinary supervised learning successfully, we need a large amount of labeled data. However, appropriate supervision is often costly to collect or unavailable due to business constraints. To exploit alternative sources of inexpensive supervision, we are working on weakly supervised learning settings such as complementary-label learning and positive-confidence learning.
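As a toy illustration of the complementary-label setting (a hypothetical sketch; the unbiased risk estimators studied in the literature are more involved), each training example carries only a class it does *not* belong to. One simple surrogate trains a softmax classifier to push probability mass away from the complementary class:

```python
import numpy as np

rng = np.random.default_rng(3)
K, n_per = 3, 300

# Three well-separated Gaussian classes in 2-D.
centers = np.array([[2.0, 0.0], [-1.0, 1.7], [-1.0, -1.7]])
X = np.vstack([rng.normal(c, 0.7, (n_per, 2)) for c in centers])
y = np.repeat(np.arange(K), n_per)       # true labels, held out for evaluation

# Complementary labels: a class the example does NOT belong to, drawn uniformly.
ybar = np.array([rng.choice([k for k in range(K) if k != yi]) for yi in y])

Xb = np.hstack([X, np.ones((len(X), 1))])          # add a bias feature
W = np.zeros((3, K))
idx = np.arange(len(X))

for _ in range(500):                     # gradient descent on -log(1 - p_ybar)
    Z = Xb @ W
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)                # softmax probabilities
    pc = P[idx, ybar]                                # prob. of complementary class
    E = np.zeros_like(P)
    E[idx, ybar] = 1.0
    G = (pc / (1.0 - pc + 1e-12))[:, None] * (E - P) # d(-log(1 - p_c)) / dz
    W -= 0.5 * Xb.T @ G / len(X)

acc = float(((Xb @ W).argmax(axis=1) == y).mean())
print(f"accuracy against true labels: {acc:.2f}")
```

Because the complementary label is uniform over the wrong classes, the true class is never penalized in expectation, so the classifier trained only on "not this class" information still recovers the correct decision regions, far above the 1/3 chance level.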
- Overfitting and Regularization
In real-world applications, we often encounter many challenges that hinder us from achieving high prediction performance, such as learning from a small amount of data or learning with label noise. To cope with these situations, we are working on making machine learning more robust and proposing regularization methods to alleviate overfitting.
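One regularizer proposed in this line of work is flooding (Ishida et al.), which prevents the training loss from falling all the way to zero: the surrogate |loss - b| + b ascends whenever the loss drops below a chosen flood level b. A minimal sketch on logistic regression (toy data; hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Small, easily separable binary dataset -> the plain loss can approach zero.
X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)), rng.normal(2.0, 0.5, (20, 2))])
y = np.repeat([0.0, 1.0], 20)
Xb = np.hstack([X, np.ones((len(X), 1))])          # add a bias feature

def ce_and_grad(w):
    """Cross-entropy loss of logistic regression and its gradient."""
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = Xb.T @ (p - y) / len(y)
    return loss, grad

b = 0.3                                            # flood level
losses = {}
for flood in (False, True):
    w = np.zeros(3)
    for _ in range(1000):
        loss, grad = ce_and_grad(w)
        if flood and loss < b:
            grad = -grad                           # ascend below the flood level
        w -= 0.5 * grad
    losses[flood] = ce_and_grad(w)[0]
print(f"final training loss: plain {losses[False]:.3f}, "
      f"flooded {losses[True]:.3f}")
```

Plain gradient descent drives the training loss toward zero, while the flooded run hovers around the flood level, which is the intended effect: the model stops chasing zero training loss, which can improve test performance under label noise.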
- Evaluation of Machine Learning Models and Data
Given a trained machine learning model, how can we know whether there is any room left for improvement? If the model has already reached the best achievable error, it is pointless to keep trying to reduce the error further. Knowing the best achievable error is therefore helpful, since we can compare it with the trained model's error; it can also serve as a criterion for dataset difficulty. We are developing methods for evaluating models and datasets, such as directly estimating the best achievable error.
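For binary classification, the best achievable (Bayes) error can be written as E_x[min(η(x), 1 - η(x))], where η(x) is the class-posterior probability, so given reliable soft labels one can estimate it by averaging min(η, 1 - η) over samples. A sketch on synthetic data where the exact answer is known in closed form (a hypothetical toy setup, not the laboratory's estimator as published):

```python
import math
import numpy as np

rng = np.random.default_rng(5)

# Two equally likely classes: x | y=0 ~ N(-1, 1) and x | y=1 ~ N(+1, 1).
n = 200_000
y = rng.integers(0, 2, n)
x = rng.normal(0.0, 1.0, n) + (2 * y - 1)

# Class posterior eta(x) = P(y=1 | x), available here in closed form;
# it plays the role of the soft labels.
f0 = np.exp(-0.5 * (x + 1) ** 2)
f1 = np.exp(-0.5 * (x - 1) ** 2)
eta = f1 / (f0 + f1)

# Direct estimate of the best achievable error.
bayes_est = float(np.mean(np.minimum(eta, 1.0 - eta)))

# Exact Bayes error of this mixture: Phi(-1), the standard normal CDF at -1.
bayes_true = 0.5 * (1.0 + math.erf(-1.0 / math.sqrt(2.0)))
print(f"estimated {bayes_est:.4f} vs exact {bayes_true:.4f}")
```

If a trained model's test error is already close to this estimate, further tuning cannot help much; a large gap between the estimate for two datasets likewise signals a difference in intrinsic difficulty.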