Swarit Jasial
NAIST

Analysis of Pan-Assay Interference Compounds in Screening Data

Polypharmacology is an emerging theme in pharmaceutical research. It refers to the increasing evidence that the therapeutic efficacy of many drugs depends on multi-target engagement. In the context of polypharmacology, compound promiscuity has been defined as the ability of small molecules to specifically interact with multiple targets, as opposed to engaging in non-specific interactions. Accordingly, promiscuity should be clearly distinguished from undesired assay interference or aggregation characteristics of compounds, which give rise to many false-positive readouts in high-throughput screening. Based on individual studies and chemical expertise, nearly 500 compound classes have been designated as pan-assay interference compounds (PAINS), which may be reactive under assay conditions. PAINS are typically contained as substructures in larger compounds. Herein, the interference characteristics of PAINS have been computationally investigated by systematically analyzing publicly available screening data and determining activity profiles of screening compounds with PAINS substructures. Furthermore, the limitations of PAINS filters have been addressed using machine learning models, which help refine and extend the PAINS concept.
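
As a concrete illustration of substructure-based PAINS filtering, the following minimal sketch uses RDKit's built-in PAINS filter catalog to flag a compound; the example molecule, a benzylidene rhodanine scaffold, is an illustrative assumption rather than a compound from this study.

```python
# Minimal PAINS substructure check using RDKit's built-in filter catalog.
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog(params)

# 5-benzylidene rhodanine, a well-known PAINS scaffold (illustrative example).
mol = Chem.MolFromSmiles("O=C1NC(=S)SC1=Cc1ccccc1")

if catalog.HasMatch(mol):
    entry = catalog.GetFirstMatch(mol)
    print("PAINS alert:", entry.GetDescription())
else:
    print("No PAINS substructure found")
```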

Kai Kunze
Keio University

From Cognition-Aware Interactions towards Augmenting Human Senses

This talk starts with an overview of interactions using eyewear computing, focusing on sensor-equipped smart glasses ( http://eyewear.pro ). I discuss sensing and interaction capabilities of smart eyewear to track reading activities, cognitive functions, facial expressions, and other aspects of everyday life, working towards applications in Virtual and Augmented Reality that use physiological signals for subtle interactions. I will then introduce a couple of application cases related to Augmented Sports, Mindfulness, and the Augmented Human. I end with a brief discussion of the potential of cross-modal correspondence and artificial muscle setups for immersive VR/AR applications.

Lei Ma
Kyushu University

Towards Quality Assurance of Deep Learning Engineering

With its tremendous success in many cutting-edge applications over the past decade, deep learning has become a key driving force of next-generation technology in many industrial domains, e.g., image processing, speech recognition, autonomous driving, and medical diagnosis. However, we have witnessed many quality issues in current state-of-the-art deep learning systems, such as the Tesla/Uber accidents and Siri/Alexa being manipulated by hidden commands. Software testing is among the most widely used techniques for quality assurance in the traditional software industry, but the quality assurance of deep learning systems is still at a very early stage. In this talk, I will begin by discussing the fundamental differences between testing and engineering traditional software and deep learning systems, followed by our current practices in testing deep learning systems and future trends towards addressing the urgent industrial demand for large-scale deployment of intelligent systems and solutions.
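
To make the contrast with traditional software testing concrete, the sketch below computes neuron coverage, one test adequacy criterion that has been proposed for deep learning systems; the toy two-layer network, the random test suite, and the activation threshold are all illustrative assumptions, not the speaker's actual tooling.

```python
# A toy illustration of neuron coverage: the fraction of hidden neurons
# activated above a threshold by at least one input in a test suite.
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer ReLU network with random weights (illustrative only).
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)

def hidden_activations(x):
    """Hidden-layer activations for one input vector."""
    return np.maximum(0.0, W1 @ x + b1)

def neuron_coverage(test_inputs, threshold=0.0):
    """Fraction of the 16 hidden neurons activated above `threshold`
    by at least one input in the test suite."""
    covered = np.zeros(16, dtype=bool)
    for x in test_inputs:
        covered |= hidden_activations(x) > threshold
    return covered.mean()

suite = [rng.normal(size=8) for _ in range(20)]
print(f"neuron coverage: {neuron_coverage(suite):.2%}")
```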

Thaer Dieb
National Institute for Materials Science

Machine Learning Assisted Materials Design and Discovery

Recent advances in machine learning and data science have encouraged researchers to utilize these domains to solve problems in the natural sciences. This combination has resulted in new interdisciplinary research domains such as bioinformatics and, more recently, materials informatics. In materials informatics, researchers are investigating the use of informatics principles to support materials research, aiming at reducing the development cycle of new materials, which currently takes about 20 years. In this presentation, we will discuss the use of data science and machine learning techniques to accelerate the inverse design of structural materials with desired properties, which traditionally depends on personal experience and expensive trial-and-error experiments. This problem is represented as selecting the optimal solution from a search space of candidates. Given a space of candidates S, machine learning assisted design follows an iterative process of {selection -> experimentation -> feedback}. Starting from a random selection, and within a computational budget, the algorithm evaluates its selection and uses the feedback for a more informed selection in the next iteration. Simulation or first-principles calculations often replace the actual experiments. We will present different approaches used in this setting and discuss their advantages and limitations.
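
A minimal sketch of this {selection -> experimentation -> feedback} loop is given below, using Gaussian process based Bayesian optimization over a discrete candidate space; the one-dimensional descriptor, the toy property function, and the upper-confidence selection rule are illustrative assumptions standing in for real material descriptors and first-principles calculations.

```python
# Illustrative {selection -> experimentation -> feedback} loop over a
# discrete candidate space, with a Gaussian process as the surrogate model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)  # search space S

def run_experiment(x):
    """Stand-in for a real experiment or first-principles calculation."""
    return float(np.sin(6 * x) + 0.1 * rng.normal())

# Start from a random selection.
first = int(rng.integers(len(candidates)))
observations = {first: run_experiment(candidates[first, 0])}

budget = 15  # computational budget (number of evaluations)
for _ in range(budget):
    idx = sorted(observations)
    gp = GaussianProcessRegressor().fit(candidates[idx],
                                        [observations[i] for i in idx])
    mean, std = gp.predict(candidates, return_std=True)
    score = mean + 2.0 * std            # optimistic (upper-confidence) score
    score[idx] = -np.inf                # do not re-select evaluated candidates
    chosen = int(np.argmax(score))      # selection
    observations[chosen] = run_experiment(candidates[chosen, 0])  # feedback

best = max(observations, key=observations.get)
print(f"best candidate: x = {candidates[best, 0]:.3f}, "
      f"property = {observations[best]:.3f}")
```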

Shota Nakamura
Research Institute for Microbial Diseases, Osaka University

Infection metagenomics: microbiome study on infectious diseases

Advances in metagenomic studies, which comprehensively analyze microbial populations, are revealing that the human body is a superorganism living in symbiosis with various microorganisms. We have been conducting metagenome research on the variation of the intestinal microbiota at the onset of infection and its application to comprehensive pathogen detection. Along with rapidly evolving Next-Generation Sequencing (NGS) technologies, we have enhanced our sample processing and data analysis capabilities. However, NGS technologies continue to evolve and produce huge amounts of metagenomic data. In this seminar, I will introduce current projects on the intestinal microbiome of patients with various diseases, metagenomic detection of pathogenic microbes, and recent progress in big data processing.

Daron Standley
Institute for Virus Research, Osaka University

Spatiotemporal models of immune synapses

B cell receptors and their soluble form, antibodies, are unique among proteins in that they undergo affinity-driven evolution through gene recombination and somatic hypermutation. As a result of their high affinity and specificity for foreign or endogenous antigens, disease-specific antibodies are a potentially attractive class of disease biomarkers. In the clinic, antibodies are the fastest growing segment of therapeutic compounds, and they also play essential roles in basic research as reagents. In spite of these qualities, the tools available to analyze antibody-antigen binding affinity and specificity are quite limited. Recent breakthroughs in single-cell sequencing have enabled determination of antibody sequences in specific tissues in a high-throughput manner. Nevertheless, methods to make use of such data are generally limited to sequence-based clustering or low-throughput structural modeling. To address these needs, we have recently developed a set of methods to enable functional analysis of antibody sequences. These tools include: high-throughput atomic-resolution modeling of antibodies and T cell receptors (Schritt, D. et al. Mol Syst Des Eng, 2019), epitope-specific clustering of antibody sequences (Li, S. et al. Mol Syst Des Eng, 2019), sequence- and structure-based epitope prediction (in prep), and prediction of antibody-antigen affinity (in prep). The basic design of these tools and their performance on high-throughput paired sequence datasets will be described.
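
As a point of reference for the sequence-based clustering baseline mentioned above, the sketch below greedily clusters CDR-H3 sequences by a simple identity threshold; the sequences and the 0.8 cutoff are illustrative assumptions, and the epitope-specific clustering method cited in the talk goes well beyond this baseline.

```python
# Greedy identity-threshold clustering of CDR-H3 sequences (baseline sketch).
def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    if len(a) != len(b):
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / len(a)

def greedy_cluster(seqs, threshold=0.8):
    """Assign each sequence to the first cluster whose representative
    it matches above `threshold`; otherwise start a new cluster."""
    clusters = []  # list of (representative, members)
    for s in seqs:
        for rep, members in clusters:
            if identity(s, rep) >= threshold:
                members.append(s)
                break
        else:
            clusters.append((s, [s]))
    return clusters

cdrh3 = ["ARDGYSSGWYFDY", "ARDGYSSGWYFDV", "ARDLYSSGWYFDY", "AKGTPLDY"]
for rep, members in greedy_cluster(cdrh3):
    print(rep, "->", members)
```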

Romain Fontugne
Internet Initiative Japan - Innovation Institute

Monitoring Internet Health at Scale

Networks connected to the Internet inherently rely on third-party networks to communicate. To ensure reliable connectivity, operators require a good understanding of the conditions of multiple remote networks on the Internet. But because they have poor visibility beyond their network's border, this task is difficult and time consuming. The Internet Health Report leverages data collected by large measurement platforms (e.g. RIPE Atlas, RIS, and RouteViews) to automatically pinpoint connectivity issues or routing changes that may have detrimental effects on other networks. This project builds on three recent research advances that make it possible to monitor AS interdependence, delay and forwarding anomalies, and network disconnections from traceroute and BGP data. The talk will introduce these data-mining techniques and present recent network events monitored with these tools.
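
To give a flavor of the delay-anomaly detection involved, the sketch below flags RTT measurements that deviate from a robust median/MAD reference; the synthetic RTT series and the three-sigma-style threshold are illustrative assumptions, not the Internet Health Report's exact method.

```python
# Flag RTT samples that deviate from a robust (median/MAD) reference.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic RTT series (ms): a stable baseline followed by a delay shift.
rtt = np.concatenate([20 + rng.normal(0, 1, 200), 35 + rng.normal(0, 1, 50)])

reference = rtt[:100]                        # observations assumed "normal"
median = np.median(reference)
mad = np.median(np.abs(reference - median))  # robust spread estimate

# 1.4826 * MAD approximates the standard deviation under Gaussian noise,
# so this behaves like a three-sigma rule that is robust to outliers.
threshold = 3 * 1.4826 * mad
anomalous = np.abs(rtt - median) > threshold
print(f"{anomalous.sum()} anomalous samples, first at index {np.argmax(anomalous)}")
```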

Matthew J. Holland
Osaka University

Robust learning systems: stronger statistical guarantees at tolerable cost

When building a bridge or flying an airplane, it is perfectly common to ask for clear conditions under which one can guarantee a particular level of safety and efficiency with high confidence. Unfortunately, in machine learning, this reasonable request is decidedly harder to answer, since we typically do not have an accurate model of the underlying phenomena. In this talk, we discuss how most popular learning algorithms provide little in the way of performance guarantees, while on the other hand, procedures with strong guarantees tend to be completely impractical for large-scale tasks. As a starting point toward re-building the standard machine learning methodology with stronger statistical guarantees, we will introduce some new algorithmic techniques and applications using novel feedback mechanisms, which have been designed to leverage robust estimation sub-routines in a computationally efficient manner.
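
As an example of the kind of robust estimation sub-routine referred to above, the sketch below implements the median-of-means estimator, which retains strong confidence guarantees under heavy-tailed data at modest computational cost; the Student-t sample and block count are illustrative assumptions rather than the speaker's specific procedure.

```python
# Median-of-means: split the sample into k blocks, average each block,
# and return the median of the block means. Far more resistant to heavy
# tails and outliers than the plain empirical mean.
import numpy as np

def median_of_means(x, k=10, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    blocks = np.array_split(rng.permutation(x), k)
    return float(np.median([b.mean() for b in blocks]))

rng = np.random.default_rng(7)
data = rng.standard_t(df=2, size=1000)  # heavy-tailed sample, true mean 0

print(f"empirical mean:  {data.mean():+.3f}")
print(f"median of means: {median_of_means(data, rng=rng):+.3f}")
```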

Soma Boubou
Omron Corporation

An insight into 3D Vision-based AI

In 1990, the first commercially available digital camera was introduced to the market. The availability of digital images opened up the entirely new research domain of 2D computer vision, which mainly deals with insight extraction from and analysis of images and videos. However, although 2D data is useful for many computer vision tasks, the world is 3D, and traditional cameras flatten the 3D world into two-dimensional data that lacks the full geometry of the sensed objects and scenes. The limitations of traditional 2D vision-based machine learning, along with advances in 3D acquisition devices, have opened up the era of 3D vision-based AI. With recent advancements in robotics and autonomous vehicles, rich 3D information has proven quite valuable in giving computers a human-like perception of the physical world. This talk will give an insight into the analysis of 3D object data. In particular, several techniques for 3D object understanding will be introduced.