Anant Madabhushi, PhD
When: Friday, December 6th, 2024, 10am-11am EST
Title:  Computational Imaging and AI for Precision Medicine

Abstract: Traditional biology generally looks at only a few aspects of an organism at a time, attempting to molecularly dissect diseases and study them part by part in the hope that the sum of knowledge of the parts will explain the operation of the whole. Rarely has this been a successful strategy for understanding the causes and cures of complex diseases. A systems-based approach to disease understanding instead aims to understand how large numbers of interrelated variables result in the emergence of definable phenotypes: a disease's gene expression profile; its cellular architecture and microenvironment, as seen in its histological image features; its three-dimensional tissue architecture and vascularization, as seen in dynamic contrast-enhanced (DCE) MRI; and its metabolic features, as seen by magnetic resonance spectroscopy (MRS) or positron emission tomography (PET). Our group has been developing novel computerized knowledge alignment, representation, and fusion tools for integrating and correlating heterogeneous biological data spanning different spatial and temporal scales, modalities, and functionalities. These tools include computerized feature analysis methods for extracting subvisual attributes that characterize disease appearance and behavior on radiographic images (radiomics) and digitized pathology images (pathomics). In this talk I will discuss our group's work on new radiomic and pathomic approaches for capturing intra-tumoral heterogeneity and modeling tumor appearance. I will also focus on how these radiomic and pathomic approaches can be applied to predicting disease outcome, recurrence, progression, and response to therapy in the context of prostate, brain, rectal, oropharyngeal, and lung cancers. Additionally, I will discuss recent work on the use of pathomics in the context of racial health disparities and the creation of more precise and tailored prognostic and response-prediction models.

Bio: Dr. Anant Madabhushi is the Robert W. Woodruff Professor of Biomedical Engineering and holds faculty appointments in the Departments of Pathology, Biomedical Informatics, Urology, and Radiology and Imaging Sciences at Emory University. He is also a Research Health Scientist at the Atlanta Veterans Administration Medical Center. Dr. Madabhushi has authored over 500 peer-reviewed publications and holds more than 100 patents issued or pending. He is a fellow of the American Institute for Medical and Biological Engineering (AIMBE), the Institute of Electrical and Electronics Engineers (IEEE), and the National Academy of Inventors (NAI). His work on "Smart Imaging Computers for Identifying Lung Cancer Patients Who Need Chemotherapy" was named by Prevention Magazine as one of the top 10 medical breakthroughs of 2018. In 2019, Nature hailed him as one of five scientists developing "offbeat and innovative approaches for cancer research". Dr. Madabhushi was named to The Pathologist's Power List in 2019, 2020, 2021, and 2022.

Bratislav Misic, PhD 
When: Friday, November 22nd, 2024, 10am-11am EST
Watch a video of the talk here
Title: Towards a biologically annotated connectome

Abstract: The brain is a network of interleaved neural circuits. In modern connectomics, brain connectivity is typically encoded as a network of nodes and edges, abstracting away the rich biological detail of local neuronal populations. Yet biological annotations for network nodes - such as gene expression, cytoarchitecture, neurotransmitter receptors or intrinsic dynamics - can be readily measured and overlaid on network models. I will discuss how connectomes can be represented and analyzed as annotated networks. Annotated connectomes allow us to reconceptualize architectural features of networks, and to relate the connection patterns of brain regions to their underlying biology. Emerging work demonstrates that annotated connectomes help to make more veridical models of brain network formation, neural dynamics and disease propagation. Finally, annotations can be used to infer entirely new inter-regional relationships and construct new types of networks that complement existing connectome representations. Altogether, biologically annotated connectomes offer a compelling way to study neural wiring in concert with local biological features.

Bio:  Bratislav Misic is an associate professor at the Montreal Neurological Institute at McGill University. He completed his undergraduate degree in Neuroscience and Mathematics at the University of Toronto. He trained with Randy McIntosh at the University of Toronto (Ph.D.) and with Olaf Sporns at Indiana University (postdoc). At the Montreal Neurological Institute, he leads the Network Neuroscience Lab (https://netneurolab.github.io). He studies how global dynamics, cognitive operations and complex behaviour emerge from the connections and interactions among distributed brain areas. He uses multimodal neuroimaging (MRI, M/EEG, PET) to map and model patterns of neural connectivity. His group pursues several research themes, from modeling communication processes on anatomical networks, to statistical models of network architecture and disease propagation.

David van Dijk, PhD, MSc 
When: Friday, November 8th, 2024, 10am-11am EST
Watch a video of the talk here
Title: Learning the Language of Biology: Transforming Biomedical Discovery with Foundation Models and Causal Inference

Abstract: In this talk, I will showcase the work of my lab in revolutionizing biomedical data analysis through foundation models and large language models (LLMs). First, we introduce CINEMA-OT, a causal-inference-based approach using optimal transport for single-cell perturbation analysis. CINEMA-OT allows individual treatment-effect analysis, response clustering, and synergy analysis, revealing potential mechanisms in airway antiviral response and immune cell recruitment. Next, we present CaLMFlow, combining flow matching with integral equations and causal language models. By fine-tuning LLMs on flow matching and conditioning on natural language prompts, CaLMFlow predicts single-cell perturbation responses and performs protein backbone generation. We then explore "Cell2Sentence" (C2S), a technique translating single-cell transcriptomics into a language for LLMs. C2S automates the generation of natural language insights directly from biological data and generates cells based on textual prompts, enhancing data interpretation and synthesis. Additionally, I will discuss "BrainLM," the first fMRI foundation model to decode brain activity, predict clinical variables, and improve our understanding of brain function and disease. Finally, I will present some of our efforts to integrate foundation models with graphs with the aim to leverage pre-trained textual and non-textual foundation models for graph-based tasks.

Bio: Dr. David van Dijk is an Assistant Professor in the departments of Computer Science and Internal Medicine at Yale University, where he leads a research group focused on developing cutting-edge machine learning (ML) and artificial intelligence (AI) algorithms for large-scale biomedical datasets. His research interests span the application of foundation models, large language models (LLMs), graph representation learning, and neural operator learning to model spatiotemporal systems in biology and medicine. Dr. van Dijk completed his PhD in Computer Science at the University of Amsterdam and the Weizmann Institute of Science, where he utilized ML techniques to decipher the complex links between DNA sequence and gene activity. He then completed postdoctoral fellowships at Columbia University and Yale University, where he developed advanced manifold learning and machine learning algorithms specifically tailored for single-cell genomic data. Currently, Dr. van Dijk's research focuses on developing innovative algorithms to model and analyze a wide range of biomedical data, including single-cell RNA sequencing, electronic health records, medical imaging, and brain activity recordings. His lab is at the forefront of applying foundation models and LLMs to extract meaningful insights from these diverse and complex datasets. By leveraging the power of advanced AI techniques, Dr. van Dijk aims to uncover novel patterns, predict clinical outcomes, and drive groundbreaking discoveries in biomedical research. Dr. van Dijk's contributions to the field have been recognized by awards such as the Dutch Research Council Rubicon fellowship and the NIH R35 MIRA award.

Gregory Goldgof, MD, PhD, MS
When: Friday, November 1st, 2024, 10am-11am EDT
Title: Liquid AI: Precision diagnoses through cytomorphology

Abstract:  Advancements in artificial intelligence (AI) are transforming diagnostic medicine, particularly in hematology. This talk will explore "Liquid AI," or approaches that leverage deep learning models to analyze cytomorphology from blood and other liquid-based specimens. By extracting morphological patterns and embedding features from individual cells, Liquid AI offers precise, automated diagnoses and predictive insights for hematologic cancers and other diseases. 

Bio: Dr. Gregory Goldgof, MD, PhD, MS, is an Assistant Member at Memorial Sloan Kettering Cancer Center and an Assistant Professor at Weill Cornell Medicine. He serves as the Director of Artificial Intelligence and Digital Pathology for the Hematopathology Service within the Department of Pathology and Laboratory Medicine at MSK. Board-certified in Clinical Informatics and Clinical Pathology, Dr. Goldgof holds degrees in Computer Science, Bioengineering, Biomedical Sciences, and Biology. His research focuses on developing AI-driven tools for diagnostic and outcome prediction in hematology and oncology.

Maggie Delano, PhD 
When: Friday, October 25, 2024, 10am-11am EDT
Watch a video of the talk here
Title:  Designing Inclusive Medical Machine Learning Datasets: Challenges and Opportunities

Abstract: While medical applications of machine learning have been explored for decades, there has been increasing research interest in machine learning for medical applications and a sense that its integration into medicine may be "inevitable." As researchers, we must take care to ensure that the machine learning systems we design do not reinforce or even exacerbate existing health inequities. This talk will discuss the challenges and opportunities for designing inclusive medical machine learning datasets, using sex/gender data in electronic health records and racial bias in pulse oximetry measurements as two examples.

Bio: Prof. Maggie Delano is an Associate Professor of Engineering at Swarthmore College. Their research focuses on the development of inclusive medical technologies, with an emphasis on wearables and machine learning. Their current research projects focus on sex/gender variables in medicine and racial bias in pulse oximetry measurements. They have published multiple articles related to gender and machine learning, particularly how the use of sex- and gender-related variables in modern medicine is exclusionary of trans and nonbinary people. Prof. Delano received their PhD in electrical engineering and computer science with a minor in women's and gender studies from MIT in 2018. Further information about Prof. Delano's work can be found on their website: https://www.maggiedelano.com/.

Maryam Shanechi, PhD, SM
When: Friday, October 18th, 2024, 10am-11am EDT
Watch a video of the talk here.
Title: AI-based closed-loop neurotechnologies

Abstract: I will present our work at the interface of AI and neuroscience to develop next-generation brain-computer interfaces that can model, decode, and regulate the activity of large populations of neurons in brain disorders such as major depression. First, I present a dynamical modeling framework that can decode brain states such as mood from human brain network activity. Then, I show how we can also predict the effect of external inputs such as electrical stimulation on brain network activity toward closed-loop regulation of neural states. I also extend our modeling framework to dissociate behaviorally relevant neural dynamics that can otherwise be missed, such as those during naturalistic movements. I also present how these models can incorporate multiple spatiotemporal scales of brain activity simultaneously. Finally, I discuss the challenge of developing deep learning algorithms for real-time neurotechnologies. I present an artificial neural network that enables accurate and flexible inference of brain states causally, non-causally, and even with missing neural samples, which can happen in wireless brain-computer interfaces. These AI-based neurotechnologies can help restore lost motor and emotional function in millions of patients with brain disorders.

Bio: Maryam M. Shanechi is the Alexander A. Sawchuk Chair and Professor in Electrical and Computer Engineering, Computer Science, Biomedical Engineering, and Neuroscience Graduate Program at the University of Southern California (USC). She is also Founding Director of the newly established USC Center for Neurotechnology. She received her B.A.Sc. degree in Engineering Science from the University of Toronto, her S.M. and Ph.D. degrees in Electrical Engineering and Computer Science from MIT, and her postdoctoral training in Neural Engineering and Neuroscience at Harvard Medical School and UC Berkeley. She conducts research at the intersection of engineering, computation, and neuroscience to develop closed-loop neurotechnology and study the brain. She is the recipient of several awards including the NIH Director’s New Innovator Award, NSF CAREER Award, ONR Young Investigator Award, ASEE’s Curtis W. McGraw Research Award, MIT Technology Review’s Top 35 Innovators Under 35, Popular Science Brilliant 10, Science News SN10, One Mind Rising Star Award, and a DoD Multidisciplinary University Research Initiative (MURI) Award. She is a Fellow of the IEEE and was named a Blavatnik National Awards Finalist in both 2023 and 2024.

Erica Berlin Baller, MD, MS
When: Friday, October 4th, 2024, 10am-11am EDT
Watch a video of the talk here.
Title: In sickness and in health: Emerging techniques to characterize psychiatric heterogeneity in the healthy and medically ill. 

Abstract: Psychiatric illnesses are heterogeneous in presentation and are present both in the medically healthy and in people with medical comorbidities. However, nearly all studies exclude participants with brain diseases and therefore cannot be extrapolated to people with intracranial pathology. In this talk, Dr. Baller will describe a series of studies, blending data-driven and hypothesis-driven techniques, that aim to characterize depression, anxiety, and cognition from adolescents to adults, and from otherwise healthy participants to patients living with multiple sclerosis.

Bio: Dr. Erica Baller is a dedicated physician-scientist who uses diverse imaging modalities and informatics to uncover the mechanisms that underlie mood and cognition in patients with comorbid psychiatric and medical illnesses. With a strong foundation in consultation-liaison psychiatry and an extensive background in neuroimaging, Dr. Baller is a bridge-builder between the realms of clinical practice and cutting-edge research. She completed her undergraduate degree in Computer Science and Psychology at Yale University, and after medical school, completed her general psychiatry residency training at the University of Pennsylvania, a Consultation-Liaison Psychiatry Fellowship at Massachusetts General Hospital, and a T32 Neuropsychiatry Fellowship at the University of Pennsylvania. She currently serves as PI of the Baller Laboratory, where, funded by a K23 and a NARSAD Young Investigator Award, she uses multiple sclerosis as a model to better understand how white matter disease contributes to depression, anxiety, and cognitive impairment. In addition to directing the Baller Lab, Dr. Baller serves as an Attending Psychiatrist on the Consultation-Liaison psychiatry service at the Hospital of the University of Pennsylvania and is Director of the Neuroscience Curriculum for the Penn psychiatry residency.

Daniel Coelho de Castro, PhD
When: Friday, September 13th, 2024, 10am-11am EDT
Watch a video of the talk here.
Title: MAIRA – Multimodal AI for Radiology Applications

Abstract: Radiology reporting is a complex task that requires detailed image understanding, integration of multiple inputs, comparison with prior imaging, and precise language generation. This makes it ideal for the development and use of generative multimodal models. Our recent work extends report generation to include the localisation of individual findings on the image – a task we call grounded report generation. Grounding is important for clarifying image understanding and interpreting AI-generated text, and therefore stands to improve the utility and transparency of automated report drafting. We propose a novel evaluation framework for grounded reporting that leverages large language models (LLMs) to assess the factuality of individual generated sentences, as well as the correctness of generated spatial annotations when present. The talk will introduce MAIRA-2, a multimodal model combining our specialised image encoder with an LLM, trained for the new task of grounded report generation on chest X-rays. MAIRA-2 uses more comprehensive inputs than explored previously: the current frontal and lateral images, the prior frontal image and report, as well as additional sections of the current report. I'll then show that these additions significantly improve report quality and reduce hallucinations, establishing a new state-of-the-art on plain findings generation on MIMIC-CXR while demonstrating the feasibility of grounded reporting as a novel and richer task.

Bio: Dr. Daniel Coelho de Castro is a senior researcher in the Biomedical Imaging team at Microsoft Research Health Futures, in Cambridge, UK. He has worked on a variety of applications of deep learning in medical image analysis—including chest radiography, computational pathology, and neuroimaging—and is particularly interested in integration of multimodal data sources. Daniel has a strong focus on combining methodological rigour, domain knowledge, and interdisciplinary collaboration to ensure reliability of machine-learning models in healthcare. Prior to joining Microsoft Research, he completed his MRes and PhD work in machine learning for medical imaging at Imperial College London, after graduating from École Centrale Paris (Dipl. Ing.) and PUC-Rio (BSc).

David Ouyang, MD
When: Friday, May 10th, 2024, 11:00 am - 12:00 pm
Title:  Blinded Prospective Randomized Trials of AI in Echocardiography

Abstract: Artificial intelligence (AI) has been developed for echocardiography, although not yet tested with blinding and randomization. To evaluate the impact of AI in the interpretation workflow, we designed a blinded, randomized non-inferiority clinical trial (ClinicalTrials.gov NCT05140642, no outside funding) of AI vs. sonographer initial assessment of left ventricular ejection fraction (LVEF). The primary endpoint was the change in the LVEF between initial AI or sonographer assessment and final cardiologist assessment, evaluated by the proportion of studies with substantial change (>5% change). From 3769 echocardiographic studies screened, 274 studies were excluded due to poor image quality. The proportion of studies substantially changed was 16.8% in the AI group and 27.2% in the sonographer group (difference -10.4%, 95% CI -13.2% to -7.7%, P<0.001 for noninferiority, P<0.001 for superiority). The mean absolute difference between final cardiologist assessment and independent prior cardiologist assessment was 6.29% in the AI group and 7.23% in the sonographer group (difference -0.96%, 95% CI -1.34% to -0.54%, P<0.001 for superiority). The AI-guided workflow saved time for both sonographers and cardiologists, and cardiologists were not able to distinguish between AI and sonographer initial assessments (blinding index of 0.088). For patients undergoing echocardiographic quantification of cardiac function, initial assessment of LVEF by AI was noninferior to assessment by sonographers.
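
The headline comparison in the abstract can be reproduced approximately with a standard two-proportion normal-approximation interval. The sketch below is illustrative only: the abstract does not state per-arm sample sizes, so an even split of the roughly 3495 included studies (about 1748 per arm) is assumed.

```python
import math

def two_prop_diff_ci(p1, n1, p2, n2, z=1.96):
    """Difference in proportions with a normal-approximation 95% CI."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# Proportions of studies substantially changed, from the abstract.
# Per-arm sizes are an ASSUMPTION (3769 screened - 274 excluded, split evenly).
diff, (lo, hi) = two_prop_diff_ci(0.168, 1748, 0.272, 1748)
print(f"difference = {diff:+.1%}, 95% CI ({lo:+.1%}, {hi:+.1%})")
```

With these assumed group sizes the interval comes out close to the reported -13.2% to -7.7%; noninferiority holds when the entire interval sits below the prespecified margin.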

Bio: David Ouyang is a cardiologist and researcher in the Department of Cardiology and Division of Artificial Intelligence in Medicine at Cedars-Sinai Medical Center. As a physician-scientist and statistician with a focus on cardiology and cardiovascular imaging, he works on applications of deep learning, computer vision, and the statistical analysis of large datasets within cardiovascular medicine. As an echocardiographer, he works on applying deep learning for precision phenotyping in cardiac ultrasound and on the deployment and clinical trials of AI models. He majored in statistics at Rice University, obtained his MD at UCSF, and received post-graduate medical education in internal medicine and cardiology, followed by a postdoc in computer science and biomedical data science at Stanford University. His group works on multi-modal datasets, linking EHR, ECG, echo, and MRI data for a broad perspective on cardiovascular disease, and its members have diverse backgrounds (ranging from physics, mechanical engineering, and computer science to cardiology, anesthesia, and internal medicine).

April Khademi, PhD
When: Friday, March 22nd, 2024, 10am-11am EDT
Title:  Marching towards clinical AI for digital pathology

Abstract: In this talk, I will discuss the progress made over the last decade for the implementation of clinical AI solutions for digital pathology. I will draw upon both academic and industrial experiences to discuss the various stages of growth of AI for digital pathology, including the challenges of the past, where we are now, and some perspectives for the future.

Bio: April Khademi is Canada Research Chair in AI for Medical Imaging, an Associate Professor of Biomedical Engineering at Toronto Metropolitan University, and Principal Investigator of the Image Analysis in Medicine Lab (IAMLAB), which specializes in the design of AI algorithms for medical imaging. Her research is funded by CIHR, NSERC, the Ontario Government, the Alzheimer's Society, the Canadian Cancer Society, and Mitacs. April is also a Faculty Affiliate of the Vector Institute, Associate Professor (status) in Medical Imaging at the University of Toronto, Associate Scientist at St. Michael's Hospital, and a Member of the Institute for Biomedical Engineering, Science & Technology (iBEST) and T-CAIREM. She has held previous research roles at the University of Guelph, GE Healthcare/Omnyx, Pathcore Inc., Sunnybrook Research Institute, and Toronto Rehab Institute. She is a licensed Professional Engineer in Ontario and an IEEE Senior Member.

Paul Sajda, PhD
When: Friday, March 15th, 2024, 10am-11am EDT
Title:  Deep Learning for Fusion and Inference in Multimodal Neuroimaging

Abstract: Simultaneous EEG-fMRI is a multi-modal neuroimaging technique that combines the advantages of both modalities, offering insights into the spatial and temporal dynamics of neural activity. In this presentation, we address the inference problem inherent in this technique by employing a transcoding framework. Transcoding refers to mapping from a specific encoding (modality) to a decoding (the latent source space) and subsequently encoding the latent source space back to the original modality. Our proposed method focuses on developing a symmetric approach involving a cyclic convolutional transcoder capable of transcoding EEG to fMRI and vice versa. Importantly, our method does not rely on prior knowledge of the hemodynamic response function or lead field matrix. Instead, it leverages the temporal and spatial relationships between the modalities and latent source spaces to learn these mappings. By applying our method to real EEG-fMRI data, we demonstrate its efficacy in accurately transcoding the modalities from one to another and recovering the underlying source spaces. It is worth noting that these results are obtained on previously unseen data, further emphasizing the robustness and generalizability of our approach. Furthermore, apart from its ability to enable symmetric inference of a latent source space, our method can also be viewed as low-cost computational neuroimaging. Specifically, it allows for generating an 'expensive' fMRI BOLD image using low-cost EEG data. This aspect highlights our approach's potential practical significance and affordability for research and clinical applications.

Bio: Paul Sajda is the Vikram S. Pandit Professor of Biomedical Engineering and Professor of Electrical Engineering and Radiology (Physics) at Columbia University. He is also a Member of Columbia’s Data Science Institute and an Affiliate of the Zuckerman Institute of Mind, Brain, and Behavior. He received a BS in electrical engineering from MIT in 1989 and an MSE and Ph.D. in bioengineering from the University of Pennsylvania in 1992 and 1994, respectively. Professor Sajda is interested in what happens in our brains when we make a rapid decision and, conversely, what neural processes and representations drive our underlying preferences and choices, mainly when we are under time pressure. His work in understanding the basic principles of rapid decision-making in the human brain relies on measuring human subject behavior simultaneously with cognitive and physiological state. Professor Sajda applies the basic principles he uncovers to construct real-time brain-computer interfaces that improve interactions between humans and machines. He is also using his methodology to understand how deficits in rapid decision-making may underlie and be diagnostic of many types of psychiatric diseases and mental illnesses. Professor Sajda is a co-founder of several neurotechnology companies and works closely with various scientists and engineers, including neuroscientists, psychologists, computer scientists, and clinicians. He is a fellow of the IEEE, AIMBE, and AAAS. He also received the Vannevar Bush Faculty Fellowship (VBFF), the DoD’s most prestigious single-investigator award. Professor Sajda is also the current President of IEEE EMBS.

Zhi Huang, PhD
When: Friday, February 9th, 2024, 10am-11am EST
Title:  Multi-modal visual language agents for pathology co-pilot

Abstract: The lack of annotated publicly available medical images is a major barrier for computational research and education innovations. At the same time, many de-identified images and much knowledge are shared by clinicians on public forums such as medical Twitter. Here we harness these crowd platforms to curate OpenPath, a large dataset of 208,414 pathology images paired with natural language descriptions. We demonstrate the value of this resource by developing pathology language–image pretraining (PLIP), a multimodal artificial intelligence with both image and text understanding, which is trained on OpenPath. PLIP achieves state-of-the-art performances for classifying new pathology images across four external datasets: for zero-shot classification, PLIP achieves F1 scores of 0.565–0.832 compared to F1 scores of 0.030–0.481 for the previous contrastive language–image pretrained model. Training a simple supervised classifier on top of PLIP embeddings also achieves a 2.5% improvement in F1 scores compared to using other supervised model embeddings. Moreover, PLIP enables users to retrieve similar cases by either image or natural language search, greatly facilitating knowledge sharing. Our approach demonstrates that publicly shared medical information is a tremendous resource that can be harnessed to develop medical artificial intelligence for enhancing diagnosis, knowledge sharing and education.
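
Zero-shot classification of the kind PLIP performs reduces, at inference time, to comparing an image embedding against embeddings of candidate text prompts. The toy sketch below (plain Python, made-up 4-dimensional vectors; PLIP's real encoders and prompts are not shown) illustrates only the cosine-similarity decision rule.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_classify(image_emb, text_embs):
    """Return the index of the caption embedding most similar to the image."""
    sims = [cosine(image_emb, t) for t in text_embs]
    return max(range(len(sims)), key=sims.__getitem__), sims

# Hypothetical embeddings standing in for PLIP's image/text encoder outputs.
image_emb = [0.9, 0.1, 0.0, 0.2]
text_embs = [
    [0.8, 0.2, 0.1, 0.1],   # e.g. "an H&E image of tumor"
    [0.0, 0.9, 0.3, 0.1],   # e.g. "an H&E image of normal tissue"
]
idx, sims = zero_shot_classify(image_emb, text_embs)
```

The same similarity scores, computed over a case database instead of prompts, also drive the image-to-image and text-to-image retrieval described above.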

Related publication: 

Huang, Z.*, Bianchi, F.*, Yuksekgonul, M., Montine, T. J., & Zou, J. (2023). A visual–language foundation model for pathology image analysis using medical Twitter. Nature Medicine, 1-10. (Nature Medicine September cover story) (*: Equal contribution)

Bio: Zhi Huang is a postdoctoral fellow at Stanford University. In August 2021, he received a Ph.D. from Purdue University, majoring in Electrical and Computer Engineering (ECE). Prior to that, he received his Bachelor of Science degree in Automation (BS-MS direct entry class) from Xi'an Jiaotong University School of Electronic and Information Engineering. His background is in Artificial Intelligence, Digital Pathology, and Computational Biology. From May 2019 to August 2019, he was a Research Intern at Philips Research North America.

Matteo Visconti di Oleggio Castello and Jack Gallant
When: Friday, February 2nd, 2024, 10am-11am EST
Title:  Characterizing individual differences in functional representations

Abstract: 

Individuals differ in brain anatomy and function due to natural variations, age, or disease. Investigating the extent of individual differences is necessary for accurately characterizing cognitive functions and dysfunctions in individuals. Individual differences, however, are largely ignored in conventional neuroimaging experiments, which commonly use group analyses to average out inter-subject variability and to increase statistical power. Given that mental disorders and neurodegenerative diseases disrupt normal thought processes, the lack of any principled method for assessing thought patterns in individuals places serious limits on our ability to diagnose these disorders. In this talk, we will show how voxelwise encoding models can be used in both neurotypical individuals and patients to investigate individual differences in functional representations. First, we will present a novel framework based on encoding models to quantify, localize, and characterize individual differences in high-dimensional representations of semantic knowledge. Then, we will briefly talk about our ongoing effort to investigate how fronto-temporal dementia affects representations of semantic knowledge. Taken together, these works show that participant-specific encoding models are a promising and powerful approach to characterize cognitive functions and dysfunctions in individuals.
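
A voxelwise encoding model is, at its core, a regularized linear regression from stimulus features to a single voxel's response, fit separately per voxel and per subject so that the fitted weights can be compared across individuals. The sketch below is a hypothetical one-feature toy with a closed-form ridge solution; real models use thousands of semantic features, many voxels, and cross-validated regularization.

```python
def fit_ridge_1d(x, y, lam=1.0):
    """Closed-form ridge for one feature: minimizes sum((y - w*x)^2) + lam*w^2,
    giving w = sum(x*y) / (sum(x^2) + lam)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / (sum(xi * xi for xi in x) + lam)

features = [0.0, 1.0, 2.0, 3.0]    # toy semantic feature value per stimulus
responses = [0.1, 0.9, 2.1, 2.9]   # toy BOLD responses of one voxel
w = fit_ridge_1d(features, responses, lam=1.0)
predicted = [w * x for x in features]
```

Comparing the fitted `w` maps across participants is one way to quantify and localize the individual differences in semantic representation that the talk describes.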

Biographies:

Jack Gallant is Class of 1940 Chair in the Neuroscience Department at the University of California at Berkeley. Professor Gallant's research focuses on high-resolution functional mapping and quantitative computational modeling of human brain networks. His lab has created detailed functional maps of human brain networks mediating vision, language comprehension and navigation, and they have used these maps to decode and reconstruct perceptual experiences directly from brain activity.

Matteo Visconti di Oleggio Castello is a Postdoctoral Scholar at the University of California, Berkeley, working in Jack Gallant’s lab. He's interested in how each one of us uniquely builds and represents meaning from the world around us. His current research focuses on developing experimental, neuroimaging, and computational approaches to study individual differences in cognitive functions and dysfunctions. Before coming to Berkeley, he received a Ph.D. in Cognitive Neuroscience at Dartmouth. Working with Ida Gobbini and Jim Haxby, he used psychophysics and fMRI to study the perception and representation of familiar faces.

James Campbell
When: Friday, January 19th, 2024, 10am-11am EST
Title:  Representation Engineering: Bringing Neuro-Imaging To Large Language Models

Abstract: In the last few years, large language models (LLMs) have shown rapidly increasing capabilities and widespread societal adoption in systems such as ChatGPT. Currently, however, LLMs are understood as black boxes, with very limited understanding of their internals. Moreover, existing approaches such as mechanistic interpretability tend to focus on low-level mechanisms, failing to explain high-level phenomena in large models. In a recent paper, we introduce a new field, which we call Representation Engineering (RepE), advocating for a top-down view of the network inspired by neuro-imaging of the human brain. In this talk, I will explain the framework of RepE as well as our empirical results. In particular, we show that we can apply RepE to isolate and control behaviors such as honesty, hallucination, utility, power-aversion, risk, emotion, harmlessness, fairness, bias, knowledge editing, and memorization. Additionally, we use RepE to achieve state-of-the-art performance on TruthfulQA. Afterward, I will discuss how Representation Engineering compares to neuro-imaging of the human brain and describe next steps for the field.
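The paper should be consulted for the actual methods; as a rough caricature, though, RepE-style "representation reading" can be sketched as estimating a concept direction from the difference of mean activations under contrasting prompts, then projecting onto it (reading) or adding it back in (control). The activations below are synthetic stand-ins, not outputs of a real LLM:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "hidden states": activations of a model layer for prompts exhibiting
# a behavior (e.g. honest answers) vs. its opposite. In practice these
# would be read from an LLM's residual stream.
d = 16
concept = rng.standard_normal(d)
honest = rng.standard_normal((50, d)) + concept      # behavior present
dishonest = rng.standard_normal((50, d)) - concept   # behavior absent

# Read out a concept direction as the difference of mean activations.
direction = honest.mean(0) - dishonest.mean(0)
direction /= np.linalg.norm(direction)

def score(h):
    # Reading: how strongly an activation expresses the concept.
    return float(h @ direction)

def steer(h, strength):
    # Control: push an activation toward (+) or away from (-) the behavior.
    return h + strength * direction
```

Steering away from the concept lowers the readout, e.g. `score(steer(h, -3.0)) < score(h)` for any activation `h`.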

Bio: James Campbell is a recent BA graduate from Cornell University, where he was an author of “Representation Engineering” and “Localizing Lying in Llama”. He is currently working at a start-up and intends to start a PhD in Fall 2024. His main research interests are in interpretability, truthfulness, and alignment of LLMs. In the past, he has worked on deep learning theory, robustness, and understanding representations in the brain. In his summers, he has interned at Johns Hopkins University and UC Berkeley.

Sara Mostafavi, PhD
When: Friday, November 17th, 2023, 10am-11am EST
Title:  Sequence-based deep learning models for understanding gene regulation and disease genetics

Abstract: The mammalian genome contains several million cis-regulatory elements, whose differential activity marked by open chromatin determines cellular differentiation. While the growing availability of functional genomics assays allows us to systematically identify cis-regulatory elements across varied cell types, how the DNA sequence of cis-regulatory elements is decoded and orchestrated on the genome scale to determine cellular differentiation is beyond our grasp. In this talk, I’ll present our work on applying and interpreting sequence-based deep learning models to derive an understanding of the relationship between regulatory sequence and cellular function in the context of immune cell differentiation. I will then describe how these models can be applied to understand the impact of unseen genetic variation across diverse cellular phenotypes, while discussing challenges when models are applied in such out-of-sample prediction tasks. Finally, I will discuss our recent work that improves the ability of sequence-based models to make predictions for unseen genetic variations. In summary, our work shows that sequence-based deep learning approaches can uncover patterns of immune transcriptional regulators that are encoded in the DNA sequence, and can provide a powerful in-silico framework to mechanistically probe the relationship between regulatory sequence and its function.

Bio: Sara Mostafavi is an Associate Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington (UW). Prior to joining UW, she was a faculty member in the Department of Statistics and Medical Genetics at the University of British Columbia (UBC), Canada. At UBC, she also held a Canada Research Chair in Computational Biology and was a recipient of a CIFAR AI Chair. She completed her PhD in the Department of Computer Science at the University of Toronto, and performed her postdoctoral research at Stanford University. Her research develops and applies machine learning and statistical methods for understanding the molecular basis of cellular function and human disease.

Lena Maier-Hein, PhD
When: Friday, November 10th, 2023, 10am-11am EST
Title:  The devil is in the details: on the importance of scientific rigor in medical imaging AI.

Abstract: Intelligent medical systems capable of capturing and interpreting sensor data and providing context-aware assistance promise to revolutionize interventional healthcare. However, a number of (sometimes non-obvious) factors substantially impede successful adoption of modern machine learning research for clinical use. Drawing from research within my own group as well as large international expert consortia, I will discuss pervasive shortcomings and new solutions in current medical imaging procedures. My presentation will emphasize the need to critically question each step of the medical imaging process, from the types of images used to the validation methodology, to guarantee that advanced imaging systems are truly ready for real-world clinical use.

Bio: Lena Maier-Hein is a full professor at Heidelberg University (Germany) and managing director of the National Center for Tumor Diseases (NCT) Heidelberg. At the German Cancer Research Center (DKFZ) she is head of the division Intelligent Medical Systems (IMSY) and managing director of the "Data Science and Digital Oncology" cross-topic program. Her research concentrates on machine learning-based biomedical image analysis with a specific focus on surgical data science, computational biophotonics, and validation of machine learning algorithms. She is a fellow of the Medical Image Computing and Computer Assisted Intervention (MICCAI) society and of the European Laboratory for Learning and Intelligent Systems (ELLIS), president of the MICCAI special interest group on challenges, and chair of the international surgical data science initiative.

Lena Maier-Hein serves on the editorial board of the journals Nature Scientific Data, IEEE Transactions on Pattern Analysis and Machine Intelligence and Medical Image Analysis. During her academic career, she has been distinguished with several science awards including the 2013 Heinz Maier Leibnitz Award of the German Research Foundation (DFG) and the 2017/18 Berlin-Brandenburg Academy Prize. She has received a European Research Council (ERC) starting grant (2015-2020) and consolidator grant (2021-2026).

Vishwali Mhasawade
When: Friday, November 3rd, 2023, 10am-11am EST
Title:  Advancing Health Equity with Machine Learning

Abstract: When a patient visits the hospital for treatment, factors outside the hospital, such as where the individual resides and what educational and vocational opportunities are present, play a vital role in the patient’s health trajectory. In contrast, most advances in machine learning in healthcare are largely restricted to data from within hospitals and clinics. While health equity, defined as minimizing avoidable disparities in health and its determinants between groups of people with different social privileges in terms of power, wealth, and prestige, is the primary principle underlying public health research, it has been largely ignored by current machine learning systems. Inequality at the social level is harmful to the population as a whole. Thus, focusing on the factors related to health outside the hospital is imperative to address specific challenges for high-risk individuals and determine what policies will benefit the community as a whole. In this talk, I will first demonstrate the challenges of mitigating health disparities resulting from the different representations of demographic groups based on attributes like gender and self-reported race. I will focus on machine learning systems using person-generated and in-hospital data from multiple geographical locations worldwide. Next, I will present a causal remedial approach to health inequity using algorithmic fairness that reduces health disparities. Finally, I will discuss how algorithmic fairness can be leveraged to achieve health equity by incorporating social factors, illustrate how residual disparities persist if social factors are ignored, and raise concerns about the missingness of health data when social factors are considered.

Bio: Vishwali Mhasawade is a Ph.D. candidate in Computer Science at New York University, advised by Prof. Rumi Chunara. Her research is supported by the Google Fellowship in Health. She focuses on designing fair and equitable machine learning systems for mitigating health disparities and developing methods in causal inference and algorithmic fairness. Vishwali was an intern at Google Research, Fiddler AI Labs, and Spotify Research. Her work has been recognized by Rising Stars in Data Science and Future Leaders in Responsible AI. She is also an active member of the machine learning community, serving as the communication chair of the Machine Learning for Health Symposium. She has been involved in mentoring roles involving high school students through the NYU ARISE program, reviewer mentoring through the Machine Learning for Health initiative, and career mentoring for Ph.D. applicants through the Women in Machine Learning program.

Tom Hartvigsen, PhD
When: Friday, October 27th, 2023, 10am-11am EST
Title:  Towards Responsible and Updatable Machine Learning in Health

Abstract: Machine learning is a promising tool for making healthcare cheap, fast, and accessible. But despite growing health data and highly accurate models, current methods remain surprisingly biased, fragile, and impractical. In this talk, I will describe two of my recent technical works on filling these important gaps, taking steps towards machine learning that can be broadly and responsibly deployed in healthcare. First, we will discuss detecting, mitigating, and leveraging implicit bias in large language models to enable their equitable use. Second, we will discuss lifelong model editing, a new path towards keeping large, expensively-trained models up-to-date in quickly-changing environments without retraining.

Bio: Tom Hartvigsen is an Assistant Professor of Data Science at the University of Virginia. He works to make machine learning trustworthy, robust, and socially responsible enough for deployment in high-stakes, dynamic settings. Tom’s research has been published at the major peer-reviewed venues in Machine Learning, Natural Language Processing, and Data Mining. He is also active in the machine learning community, serving as the General Chair for the Machine Learning for Health Symposium in 2023, helping organize the 2023 Conference on Health, Informatics, and Learning, and co-chairing workshops on time series and generative AI at NeurIPS’22 and ICML’23. Prior to joining UVA, Tom was a postdoc with Marzyeh Ghassemi at MIT’s Computer Science and Artificial Intelligence Laboratory. He holds a Ph.D. and M.S. in Data Science from WPI and a B.A. in Applied Math from SUNY Geneseo.

Helen Zhou
When: Friday, October 13th, 2023, 10am-11am EST
Title:  Towards Characterizing and Adapting to Shifts in Medical Data over Time

Abstract: As machine learning algorithms in healthcare transition from research into deployment, they face a constantly evolving environment, rife with changing clinical practices, data collection policies, patient populations, and even diseases themselves. Models that performed well in the past are liable to fail in the future, and especially in such high-stakes settings as healthcare, complacency can have consequences. In this talk, we start by empirically characterizing real-world shifts over time in medical data by examining model performance using a deployment-oriented evaluation framework (EMDOT). Inspired by the concept of backtesting, EMDOT simulates possible training procedures that practitioners might have been able to execute at each point in time, and evaluates the resulting models on all future time points. Across six distinct sources of medical data, we find varying levels of performance improvement and degradation, and we inspect surprising jumps in performance over time. Then, motivated by changes that happen in healthcare data, we introduce an idealized model of distribution shift, termed missingness shift. We introduce the problem of Domain Adaptation under Missingness Shift (DAMS), where (labeled) source data and (unlabeled) target data would be exchangeable but for different missing data mechanisms, and we derive a collection of theoretical results and strategies for DAMS.  We conclude with some open questions and future directions.
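The backtesting idea behind EMDOT can be sketched with a toy drifting dataset: at each time point, train on everything seen so far, then score the model on every future period. The data-generating process and the simple linear classifier below are illustrative assumptions, not the framework's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy longitudinal data: one (X, y) batch per time period, with the label
# rule drifting over time to mimic changing clinical practice.
periods = []
for t in range(6):
    X = rng.standard_normal((400, 2))
    y = ((1.0 + 0.3 * t) * X[:, 0] - X[:, 1] > 0).astype(int)
    periods.append((X, y))

def fit(X, y):
    # Least-squares linear classifier on +/-1 targets.
    return np.linalg.lstsq(X, 2.0 * y - 1.0, rcond=None)[0]

def accuracy(w, X, y):
    return float(((X @ w > 0).astype(int) == y).mean())

# Backtesting in the spirit of EMDOT: for each simulated "deployment date" t,
# train on all data available before t and evaluate on t and every later period.
results = {}
for t in range(1, len(periods)):
    X_tr = np.vstack([p[0] for p in periods[:t]])
    y_tr = np.concatenate([p[1] for p in periods[:t]])
    w = fit(X_tr, y_tr)
    for u in range(t, len(periods)):
        results[(t, u)] = accuracy(w, *periods[u])
```

Plotting `results[(t, u)]` against `u` for each `t` exposes how models trained on older data degrade (or occasionally hold up) as the distribution shifts.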

Bio: Helen Zhou is a PhD Candidate in the Machine Learning Department at Carnegie Mellon University working with Zachary Lipton. Her research interests lie at the intersection of machine learning and healthcare, with a focus on time series and distribution shift. She is supported by the NSF Graduate Research Fellowship Program and is a 2019 Paul and Daisy Soros Fellow. Previously, she received her Bachelor's and MEng in Electrical Engineering and Computer Science at MIT.

Anqi Wu, PhD
When: Friday, October 6th, 2023, 10am-11am EST
Title:  Understanding the Brain Using Interpretable Machine Learning Models

Abstract: Computational neuroscience is a burgeoning field embracing exciting scientific questions, a deluge of data, an imperative demand for quantitative models, and a close affinity with artificial intelligence. These opportunities promote the advancement of data-driven machine learning methods to help neuroscientists deeply understand our brains. In particular, my work lies in such an interdisciplinary field and spans the development of scientifically-motivated probabilistic modeling approaches for neural and behavior analyses. In this talk, I will first present my work on developing Bayesian methods to identify latent manifold structures with applications to neural recordings in multiple cortical areas. The models are able to reveal the underlying signals of neural populations as well as uncover interesting topography of neurons where there is a lack of knowledge and understanding about the brain. Discovering such low-dimensional signals or structures can help shed light on how information is encoded at the population level, and provide significant scientific insight into the brain. Next, I will talk about probabilistic priors that encourage region-sparse activation for brain decoding. The proposed model provides spatial decoding weights for brain imaging data that are both more interpretable and achieve higher decoding performance. Finally, I will introduce a series of works on semi-supervised learning for animal behavior analysis and understanding. I will show that when we have a very limited amount of human-labeled data, semi-supervised learning frameworks can effectively resolve the scarce-data issue by leveraging both labeled and unlabeled data in the context of pose tracking, video understanding, and behavioral segmentation.
By actively working on both neural and behavioral studies, I hope to develop interpretable machine learning and Bayesian statistical approaches to understanding neural systems integrating extensive and complex behaviors, thus providing a systematic understanding of neural mechanisms and biological functions.

Bio: Anqi Wu is an Assistant Professor at the School of Computational Science and Engineering (CSE), Georgia Institute of Technology. She was a Postdoctoral Research Fellow at the Center for Theoretical Neuroscience, the Zuckerman Mind Brain Behavior Institute, Columbia University. She received her Ph.D. degree in Computational and Quantitative Neuroscience and a graduate certificate in Statistics and Machine Learning from Princeton University. Anqi was selected for the 2018 MIT Rising Star in EECS, 2022 DARPA Riser, and 2023 Alfred P. Sloan Fellow. Her research interest is to develop scientifically-motivated Bayesian statistical models to characterize structure in neural data and behavior data in the interdisciplinary field of machine learning and computational neuroscience. She has a general interest in building data-driven models to promote both animal and human studies in the system and cognitive neuroscience.

Davide Momi, PhD
When: Friday, May 12th, 2023, 10am-11am EST
Watch a video of the talk here.
Title:  Dissecting the spatio-temporal connectivity dynamics of the TMS-induced signal

Abstract: The brain is a complex, nonlinear, multiscale, and intricately interconnected physical system, whose laws of motion and principles of organization have proven challenging to understand with currently available measurement techniques. In such epistemic circumstances, application of spatially and temporally synchronized systematic perturbations, and measurement of their effects, is a central tool in the scientific armoury. For human brains, the technological combination that best supports this non-invasive perturbation-based modus operandi is concurrent transcranial magnetic stimulation (TMS) and electroencephalography (EEG).

Spatiotemporally complex and long-lasting TMS-EEG evoked potential (TEP) waveforms are believed to result from recurrent, re-entrant activity that propagates broadly across multiple cortical and subcortical inter-connected regions, dispersing from and later re-converging on, the primary stimulation site. However, if we loosely understand the TEP of a TMS-stimulated region as the impulse response function of a noisy underdamped harmonic oscillator, then multiple later activity components (waveform peaks) should be expected even for an isolated network node in the complete absence of recurrent inputs. Thus emerges a critically important question for basic and clinical research on human brain dynamics: what parts of the TEP are due to purely local dynamics, what parts are due to reverberant, re-entrant network activity, and how can we distinguish between the two? To disentangle this, we have conducted several studies to establish the contribution of functional and structural connectivity in predicting TMS-induced signal propagation after perturbation of two distinct brain networks. 

Specifically, healthy individuals underwent two identical TMS-EEG visits where neuronavigated TMS pulses were delivered to nodes of the default mode network (DMN) and the dorsal attention network (DAN). The functional and structural connectivity derived from each individual stimulation spot were characterized via functional magnetic resonance imaging (fMRI) and Diffusion Weighted Imaging (DWI), and signal propagation across these two metrics was compared. Direct comparison between the signals extracted from brain regions either functionally or structurally connected to the stimulation sites shows a stronger activation over cortical areas connected via white matter pathways, with a minor contribution of functional projections. Furthermore, using source-localized TMS-EEG analyses and whole-brain connectome-based computational modelling we demonstrated that recurrent network feedback begins to drive TEP responses from 100 ms post-stimulation, with earlier TEP components being attributable to local reverberatory activity within the stimulated region. Subject-specific estimation of neurophysiological parameters additionally indicated an important role for inhibitory GABAergic neural populations in scaling cortical excitability levels, as reflected in TEP waveform characteristics. Overall, results provide new insights into the role of structural and functional connectome in shaping recurrent activity in stimulation-evoked brain responses. Characterizing these phenomena is important not only as a basic question in systems and cognitive neuroscience, but also as a foundation for clinical applications concerned with changes in excitability and connectivity due to neuropathologies or interventions.

Bio: Davide completed his Ph.D. at the Department of Neuroscience, Imaging and Clinical Sciences at the University G. d’Annunzio of Chieti. As part of his PhD, Davide attended a period abroad as a visiting PhD student at the Martinos Center for Biomedical Imaging in Boston. Prior to his doctoral studies, he obtained a Master’s degree in Neurosciences and Neuro-Psychological Rehabilitation from the University of Bologna and a Bachelor’s in psychology from the University of Perugia. Davide has experience with multimodal neuroimaging data (e.g. DWI, fMRI, ASL) analysis, TMS applications, EEG data collection and analysis, quantitative structural MRI assessment (e.g. brain morphometry, cortical thickness, etc.), machine learning, cognitive tasks development with a growing passion for simulations of network-level macroscale brain dynamics. His main interest is focused on predicting the TMS signal propagation at the network-level, based on neuroimaging and electrophysiological data. At the WBMG, Davide is working on several projects which will involve multimodal neuroimaging, non-invasive brain stimulation (TMS, TES) and whole-brain modelling. The primary aim of his project is to predict TMS outcomes by combining computational, neuroimaging and electrophysiological approaches.

Satrajit Ghosh, PhD
When: Friday, April 28th, 2023, 10am-11am EST
Watch a video of the talk here.
Title:  Using translational applications to unpack machine learning models and systemic challenges 

Abstract: Several of our current projects focus on the neural basis and translational applications of human spoken communication and applying machine learning to improve precision psychiatry and medicine. To carry out this work, we delve into different sensors (microphones, chat interfaces, MRI scanners, genomic, behavioral, and clinical assays) and their associated data types (audio and video signals, structured and unstructured text, volumetric, multichannel, and timeseries imaging) across many diverse sources of data. Each translational application, whether using human communication or brain imaging, has allowed us to understand and embrace complexity, and led to improvement in models and applications. In this talk, some example applications will help illustrate how to consider the multiple facets and sources of variability, and demonstrate that delivering generalizable findings requires understanding intricate relations between many different components of data. This has resulted in a necessity for collaborations, a consideration of different and diverse tools, and a quest for richer provenance. Despite the significant promise today of using linked and longitudinal data, along with automation and machine learning technologies, to solve many problems in medicine, significant challenges remain. Through the lens of some consortial projects focused on open data, science, and technology, I will discuss efforts to address technical issues related to data, infrastructure, and computational approaches, and ethical considerations that span this space.

Bio: Satrajit Ghosh is the Director of the Open Data in Neuroscience Initiative and a Principal Research Scientist at the McGovern Institute for Brain Research at MIT, and an Assistant Professor of Otolaryngology - Head and Neck Surgery at Harvard Medical School. He is a computer scientist and computational neuroscientist by training. He directs the Senseable Intelligence Group (https://sensein.group/), whose research portfolio comprises projects on spoken communication, brain imaging, and informatics to address gaps in scientific knowledge in three areas: the neural basis and translational applications of human spoken communication, machine learning approaches to precision psychiatry and medicine, and preserving information for reproducible research and knowledge generation. He is a PI of the DANDI project, a BRAIN Initiative archive and collaboration space for cellular neurophysiology, of Nobrainer, a framework for deep learning applications in neuroimaging, and of the BICAN knowledgebase effort to curate information around cellular atlases across species. He was one of the lead architects of Nipype, a workflow platform that supports the neuroimaging community, a member of the INCF data sharing taskforce that was instrumental in the initial development of the BIDS standard, and a contributor to many open-source software projects. He is a strong proponent of open and collaborative science.

Chethan Pandarinath, PhD
When: Friday, March 31st, 2023, 10am-11am EST
Title:  Uncovering neural population dynamics: applications to basic science and brain-machine interfaces

Abstract: Large-scale recordings of neural activity have created new opportunities to study network-level dynamics in the brain in unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering these dynamics, interpreting them, and harnessing them for therapeutic applications. Our group has developed new machine learning methods to uncover dynamics from recordings of neural population activity on millisecond timescales. I will demonstrate how these methods can be applied to data from diverse brain areas and behavioral tasks, without regard to behavior. I will also discuss how these approaches can be harnessed to improve brain-machine interfaces for people with paralysis.

Bio:  Dr. Pandarinath is an assistant professor in the Coulter Department of Biomedical Engineering at Emory University and Georgia Tech and the Department of Neurosurgery at Emory, where he directs the Systems Neural Engineering Lab. His group’s research uses electrical engineering principles and AI toward studying the nervous system and designing assistive devices for people with neurological disorders or injuries.

 

Dr. Pandarinath received undergraduate degrees in Computer Engineering, Physics, and Science Technology and Society from North Carolina State University. During his PhD in EE at Cornell, his research focused on the early visual system and creating novel retinal prosthetic approaches to restore vision. His postdoc at Stanford with Jaimie Henderson and Krishna Shenoy, as a part of the BrainGate team, focused on improving the performance of brain-machine interfaces to restore function to people with paralysis. He is a 2019 Sloan Fellow and K12 Scholar in the NIH-NICHD Rehabilitation Engineering Career Development Program. He is also a recipient of the 2021 NIH Director’s New Innovator Award. His work has been funded by the Neilsen Foundation, NSF, DARPA, Burroughs Wellcome Fund, Simons Foundation, and NIH.

Odelia Schwartz, PhD
When: Friday, March 24th, 2023, 10am-11am EST
Title:  Contextual effects in the visual brain and artificial systems

Abstract: Neural responses and perception of visual inputs strongly depend on the spatial context, such as what surrounds a given object or feature. I will discuss our work on developing a visual cortical model based on the hypothesis that neurons represent inputs in a coordinate system that is matched to the statistical structure of images in the natural environment. The model generalizes a nonlinear computation known as divisive normalization, that is ubiquitous in neural processing, and can capture some spatial context effects in cortical neurons. I will further discuss how we are incorporating such nonlinearities and studying contextual effects in deep neural networks.
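In its textbook form, divisive normalization divides each unit's driven response by a signal pooled over neighboring units, which produces exactly the kind of surround suppression the abstract describes. A minimal sketch (the parameter names and values are illustrative, not the model discussed in the talk):

```python
import numpy as np

def divisive_normalization(drives, sigma=1.0, n=2.0):
    """Each unit's response is its driving input raised to a power n,
    divided by a pooled signal from all units plus a semi-saturation
    constant sigma."""
    drives = np.asarray(drives, dtype=float)
    num = drives ** n
    pool = sigma ** n + num.sum()
    return num / pool

# A unit's response is suppressed when its surround is also driven:
center_alone = divisive_normalization([2.0, 0.0, 0.0])[0]   # 4 / (1 + 4)
with_surround = divisive_normalization([2.0, 2.0, 2.0])[0]  # 4 / (1 + 12)
assert with_surround < center_alone
```

Models of natural-scene statistics motivate making the pool weights depend on how statistically dependent the center and surround typically are, rather than pooling uniformly as above.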

Bio: Odelia Schwartz is an Associate Professor in the Department of Computer Science at the University of Miami. Her research is at the intersection of the brain sciences and machine learning. A main focus has been understanding how the brain makes sense of visual information in the world. She received a Ph.D. from the Center for Neural Science at New York University and an MS in Computer Science at the University of Florida. She did her postdoctoral research at the Salk Institute, and was previously an Assistant Professor at Albert Einstein College of Medicine. 

Nicholas Chia, PhD
When: Friday, March 3rd, 2023, 10am-11am EST
Title:  Inverse Reinforcement Learning for Predicting and Understanding Cancer Evolution

Abstract: Molecular biology information is at the heart of the core biological processes of evolution and adaptation. The evolutionary process drives a number of important disease processes, from viral evolution to cancer progression. Approaches such as Bayesian methods, Markov chain modelling, and machine learning (ML) techniques typically characterize only a small number of either well-defined or arbitrarily-defined stages in the disease process and predict gross outcomes such as survival, or make binary classifications (e.g., drug responders vs. non-responders). They are not designed to unravel the complexity of the entire evolutionary process that gradually drives evolutionary progression via individual mutations. In contrast, inverse reinforcement learning (IRL) algorithms closely parallel the step-by-step accumulation of genomic alterations in lineage evolution. IRL is a specific form of machine learning from demonstrations that estimates the reward function of a Markov decision process from examples provided by expert demonstrations. This talk will highlight ongoing work to prototype an articulate IRL model of the molecular evolution of colorectal cancer.

Bio: Dr. Nicholas Chia is Bernard and Edith Waterman Co-Director of the Microbiome Program and Director of the Beyond DNA Theme in the Center for Individualized Medicine. He is a Senior Associate Consultant in the Department of Surgery with a Joint Appointment in Health Sciences Research. He is an Associate Professor in the Departments of Surgery and Laboratory Medicine and Pathology.  Dr. Chia also serves as an adjunct professor of Biomedical Informatics and Computational Biology at the University of Minnesota, Rochester (MN). Dr. Chia received his B.S. in Physics in 2001 from Georgetown University and his Ph.D. in Biophysics at Ohio State University in 2006. He moved on to a postdoctoral position at the University of Illinois before coming to the Mayo Clinic to help lead the Microbiome Program. He now holds leadership positions at Mayo Clinic including the Co-director of the Microbiome Program and the Director of Beyond DNA, within the Center for Individualized Medicine. During his time as a researcher, he has had the privilege of working alongside distinguished members of the National Academy of Science.

Veronika Cheplygina, PhD
When: February 17th, 2023, 10am-11am ET
Title: Curious findings about public medical imaging datasets

Abstract: Medical imaging is an important research field with many opportunities for improving patients' health. However, there are a number of challenges that are slowing down the progress of the field as a whole. In this talk I discuss several problems which occur when we as researchers make decisions about choosing datasets, methods and evaluation metrics, highlighted in our recent paper "Machine learning for medical imaging: methodological failures and recommendations for the future”. I will then zoom into datasets in particular, and discuss some ongoing work about some peculiarities of publicly available datasets.

Bio: Dr. Veronika Cheplygina's research focuses on limited labeled scenarios in machine learning, in particular in medical image analysis. She received her Ph.D. from Delft University of Technology in 2015. After a postdoc at the Erasmus Medical Center, in 2017 she started as an assistant professor at Eindhoven University of Technology. In 2020, failing to achieve various metrics, she left the tenure track in search of the next step where she could contribute to open and inclusive science. In 2021 she started as an associate professor at IT University of Copenhagen. Next to research and teaching, Veronika blogs about academic life at https://www.veronikach.com. She also loves cats, which you will often encounter in her work.

Daniel Yamins, PhD - Assistant Professor of Psychology and Computer Science, Stanford University
When: February 3rd, 2023, 10am-11am ET
Title: Beyond ConvNets: Deepening Our Computational Understanding of Neural Systems

Abstract: I will begin by discussing advances in unsupervised learning, and how they spur improvements over older categorization-based convnets as models of the visual system.  I'll show how these models can be extended to describe the emergence of functional organization throughout the visual pathway, and better allow us to understand the existence (or lack thereof) of multiple visual streams.  I'll also describe recent state-of-the-art approaches to visual scene understanding, which leverage ideas from cognitive science and developmental psychology in building better artificial intelligence.  I'll close with some thoughts about the philosophy of computational neuroscience in the age of AI. 

Bio: I'm a computational neuroscientist at Stanford University, where I'm an assistant professor of Psychology and Computer Science, and a faculty scholar at the Wu Tsai Neurosciences Institute. I work on science and technology challenges at the intersection of neuroscience, artificial intelligence, psychology, and large-scale data analysis. The brain is the embodiment of the most beautiful algorithms ever written. My research group, the Stanford NeuroAILab, seeks to "reverse engineer" these algorithms, both to learn how our minds work and to build more effective artificial intelligence systems.

Screen Shot 2022-09-02 at 1.57.35 PM.png
Guorong Wu, PhD - Associate Professor, Department of Psychiatry, UNC School of Medicine
When: January 27th, 2023, 10am-11am ET
Title: Discovering Novel Mechanisms for Alzheimer’s Disease by Machine Learning

Abstract: We are now in the era of big data, which allows us to answer biomedical questions today that we couldn’t answer before. As a computer scientist, this is the most exciting time in my entire career. In the last ten years, I have been collaborating with neurology, neuroscience, genetics, and imaging experts to understand the pathophysiological mechanism of Alzheimer’s disease (AD) and how AD-related genes affect aging brains. Specifically, my lab is interested in establishing a neurobiological basis to quantify the structural/functional/behavioral difference across individuals and discover reliable and putative biomarkers that will allow us to come up with personalized therapy and treatment for individuals. In this talk, I would like to share my experience of integrating the domain knowledge of neuroscience into the development of imaging-AI based computational tools for automated image analysis, image interpretation, and outcome prediction, with a focus on imaging biomarkers and the computer-assisted early diagnostic engine for AD. At the end of this talk, I will demonstrate the preliminary results of recent research projects where we aim to understand the propagation mechanisms of tau aggregates.

Bio: Dr. Wu is currently an Associate Professor in the Department of Psychiatry. He also holds joint appointments in the Department of Computer Science, the Department of Statistics and Operations Research, the UNC Neuroscience Center, and the Carolina Institute for Developmental Disabilities.

Dr. Wu is interested in developing advanced computational tools and data-driven methods to understand how the human brain works and to discover high-sensitivity, high-specificity biomarkers for neurological diseases such as Alzheimer’s disease. His current research projects include quantifying brain development using 3D cellular-resolution imaging, brain network analyses, and computer-assisted intervention/diagnosis focusing on Alzheimer’s disease.

Screen Shot 2022-11-16 at 12.45.00 PM.png
Weinan Sun, PhD - Research Scientist, Janelia Research Campus
When: December 9th, 2022, 10:00-11:00 am EST
Title: Longitudinal imaging of thousands of hippocampal neurons reveals emergence of internal models that parallel changes in behavioral strategy 

Abstract: The hippocampal formation is essential for an animal’s ability to navigate and forage effectively in complex environments. It contributes by forming structured representations of the environment, often called cognitive maps. While many experimental and theoretical aspects of learned hippocampal cognitive maps are well established, the exact learning trajectories for their formation and usage remain unknown. We performed large-scale 2-photon calcium imaging of more than 5,000 neurons in mouse CA1 and tracked neural activity of the same neurons over 30 days, while the animals learned multiple versions of a linear two-alternative choice task in a virtual reality (VR) environment. We used various manifold discovery techniques to visualize the high-dimensional neural data over the entire learning period and found that each animal went through a stereotyped transition of learning stages, demarcated by distinct low-dimensional embeddings and decorrelations of neural activity at key positions along the VR track, which correlated with task performance. Our results indicate that the evolution of hippocampal representations during learning reflects the extraction of task-related features that correlate temporally with the animal’s evolving performance. Furthermore, the learned structures appear to be reused in novel tasks, suggestive of transfer learning. By designing and simulating artificial agents based on reinforcement learning, we found that some architectures reproduced key features of both animal behavior and neural activity. The ability to monitor the formation of cognitive maps over weeks-long periods of learning provides a platform for developing and testing hypotheses regarding the underlying plasticity mechanisms, cell types, circuits, and computational rules responsible for adaptive learning.

Bio: With expertise in cellular and systems neuroscience as well as machine learning, Dr. Sun aims to: (1) better understand the biological underpinnings of animal cognition and intelligent behavior, and (2) use this understanding to improve AI systems. His ongoing work involves using deep learning theories to understand how declarative memories are transformed over time as well as using large-scale 2-photon mesoscopic imaging methods to record thousands of neurons in rodents engaged in learning tasks. 

Screen Shot 2022-11-16 at 12.41.13 PM.png
Gitta Kutyniok, PhD - Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at the Ludwig-Maximilians Universität München
When: December 2nd, 2022, 10:00-11:00 am EST
Title: Reliable AI in Medical Imaging: Successes, Challenges, and Limitations

Abstract: Deep neural networks, as the current workhorse of artificial intelligence, have already been tremendously successful in real-world applications, ranging from science to public life. The area of (medical) imaging sciences has been particularly impacted by deep learning-based approaches, which sometimes by far outperform classical approaches for particular problem classes. However, one current major drawback is the lack of reliability of such methodologies.

In this lecture we will first provide an introduction into this vibrant research area. We will then present some recent advances, in particular, concerning optimal combinations of traditional model-based methods with deep learning-based approaches in the sense of true hybrid algorithms. Due to the importance of explainability for reliability, we will also touch upon this area by highlighting an approach which is itself reliable due to its mathematical foundation. Finally, we will discuss fundamental limitations of deep neural networks and related approaches in terms of computability, and how these can be circumvented in the future, which brings us into the world of quantum computing.

Bio: Gitta Kutyniok currently holds a Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at the Ludwig-Maximilians Universität München. She received her Diploma in Mathematics and Computer Science as well as her Ph.D. degree from the Universität Paderborn in Germany, and her Habilitation in Mathematics in 2006 at the Justus-Liebig Universität Gießen. From 2001 to 2008 she held visiting positions at several US institutions, including Princeton University, Stanford University, Yale University, Georgia Institute of Technology, and Washington University in St. Louis, and was a Nachdiplomslecturer at ETH Zurich in 2014. In 2008, she became a full professor of mathematics at the Universität Osnabrück, and moved to Berlin three years later, where she held an Einstein Chair in the Institute of Mathematics at the Technische Universität Berlin and a courtesy appointment in the Department of Computer Science and Engineering until 2020. In addition, Gitta Kutyniok has held an Adjunct Professorship in Machine Learning at the University of Tromsø since 2019. Gitta Kutyniok has received various awards for her research such as an award from the Universität Paderborn in 2003, the Research Prize of the Justus-Liebig Universität Gießen and a Heisenberg-Fellowship in 2006, and the von Kaven Prize by the DFG in 2007. She was invited as the Noether Lecturer at the ÖMG-DMV Congress in 2013, a plenary lecturer at the 8th European Congress of Mathematics (8ECM) in 2021, the lecturer of the London Mathematical Society (LMS) Invited Lecture Series in 2022, and an invited lecturer at both the International Congress of Mathematicians 2022 (ICM 2022) and the International Congress on Industrial and Applied Mathematics 2023 (ICIAM 2023). Moreover, she became a member of the Berlin-Brandenburg Academy of Sciences and Humanities in 2017, a SIAM Fellow in 2019, and a member of the European Academy of Sciences in 2022.
In addition, she was honored by a Francqui Chair of the Belgian Francqui Foundation in 2020. She was Chair of the SIAM Activity Group on Imaging Sciences from 2018-2019 and Vice Chair of the new SIAM Activity Group on Data Science in 2021, and currently serves as Vice President-at-Large of SIAM. She is also the spokesperson of the Research Focus "Next Generation AI" at the Center for Advanced Studies at LMU, and serves as LMU-Director of the Konrad Zuse School of Excellence in Reliable AI. Gitta Kutyniok's research work covers, in particular, the areas of applied and computational harmonic analysis, artificial intelligence, compressed sensing, deep learning, imaging sciences, inverse problems, and applications to life sciences, robotics, and telecommunication.

Screen Shot 2022-09-02 at 1.56.39 PM.png
George Chen, PhD - Assistant Professor of Information Systems, Heinz College
Affiliated Faculty, Machine Learning Department, Carnegie Mellon University
When: October 14th, 2022, 10:00-11:00 am EDT
Title: Survival Kernets: Scalable and Interpretable Deep Kernel Survival Analysis with an Accuracy Guarantee

Abstract: Survival analysis is about modeling how much time will elapse before a critical event occurs. Examples of such critical events include death, disease relapse, readmission to the hospital, and awakening from a coma. Recent machine learning advances in survival analysis have largely focused on architecting deep neural networks to achieve state-of-the-art prediction accuracy, with very little focus on whether the learned models are easy for application domain experts to interpret. In this talk, I present a new scalable deep kernel survival analysis model that has prediction accuracy competitive with the state-of-the-art but also aims to be interpretable and comes with a statistical accuracy guarantee. This model automatically learns a similarity score between any two data points (e.g., patients) and also represents each data point as a combination of exemplar training points, which could be thought of as clusters. These clusters can be visualized in terms of raw features and survival outcomes. I show experimental results on healthcare survival analysis datasets focused on predicting time until death for patients with various diseases.
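The exemplar-weighted prediction at the heart of kernel survival analysis can be illustrated with a kernel-weighted Kaplan-Meier estimator. This is a minimal sketch of the general idea, not of the survival kernets model itself; the Gaussian kernel on raw features is an assumption standing in for the learned similarity score.

```python
import numpy as np

def kernel_weights(x_query, X_train, bandwidth=1.0):
    """Gaussian kernel similarity between a query point and each training point."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def weighted_kaplan_meier(times, events, weights, t_query):
    """Kernel-weighted Kaplan-Meier estimate of survival probability S(t_query).

    times   : observed times (event or censoring) for each training point
    events  : 1 if the event occurred, 0 if censored
    weights : similarity of each training point to the query
    """
    surv = 1.0
    for t in np.unique(times[events == 1]):   # walk through event times in order
        if t > t_query:
            break
        at_risk = weights[times >= t].sum()   # weighted number still at risk
        died = weights[(times == t) & (events == 1)].sum()
        if at_risk > 0:
            surv *= 1.0 - died / at_risk      # weighted KM product term
    return surv
```

With uniform weights this reduces to the ordinary Kaplan-Meier estimator; a learned similarity concentrates the weights on exemplar-like neighbors of the query patient.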

Bio: George H. Chen is an assistant professor at Carnegie Mellon University's Heinz College of Information Systems and Public Policy. He primarily works on building trustworthy machine learning models for time-to-event prediction (survival analysis) and for time series analysis. He often uses nonparametric prediction models that work well under very few assumptions on the data. His main application area is in healthcare. George completed his PhD in Electrical Engineering and Computer Science at MIT, where he won the George Sprowls award for outstanding PhD thesis in computer science and the Goodwin Medal, the top teaching award given to graduate students. He is a recipient of an NSF CAREER award and has also co-founded CoolCrop, a startup that provides cold storage and marketing analytics to rural farmers in India.

enzo-tagliazucchi (1).jpg
Enzo Tagliazucchi, PhD - Director of the Consciousness, Culture, and Complexity Lab, Latin American Brain Health Institute and Faculty of Exact and Natural Sciences, UBA
When: October 7th, 2022, 10:00-11:00 am EDT
Title: Exploring whole-brain dynamics with computational models and variational autoencoders

Abstract: Whole-brain dynamics are high dimensional and thus difficult to grasp by intuitive means. This difficulty is compounded by other limitations of human neuroimaging, such as the impossibility of studying causal interventions and assessing mechanistic hypotheses. We show how the combination of whole-brain modeling with deep variational autoencoders can be used to simultaneously alleviate these issues, yielding interpretable visualizations of global brain states as well as possible mechanisms underlying their emergence and stability. In particular, we show applications to neurodegenerative diseases and states of consciousness.
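The variational-autoencoder machinery referred to here rests on two standard ingredients: the reparameterization trick and a closed-form KL term for a diagonal Gaussian posterior. Below is a minimal sketch of those two pieces only; the encoder/decoder networks and the neuroimaging data pipeline are omitted.

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """Sample z ~ N(mu, diag(exp(logvar))) via z = mu + sigma * eps,
    which keeps the sampling step differentiable in mu and logvar."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar), axis=-1)
```

The training objective (the ELBO) is the reconstruction log-likelihood minus this KL term; the latent means mu then provide the low-dimensional, interpretable visualization of global brain states mentioned in the abstract.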

Bio: I studied physics at the University of Buenos Aires and obtained my PhD in neuroscience at the University of Frankfurt, Germany. I was awarded an AXA postdoctoral fellowship, a Marie Curie individual fellowship and a Mercator Fellowship from the DFG, and worked as researcher at the Netherlands Institute for Neuroscience and the Brain and Spine Institute in Paris, France. My main topic of interest is the neuroscience of healthy and pathological brain states, including states characterized by full or partial loss of consciousness and self-awareness, such as sleep, anesthesia, dementia and disorders of consciousness. I also lead a multidisciplinary group of scientists working at the interface between theoretical and computational neuroscience, machine learning, data science and computational neuropsychiatry, areas from which I expect to contribute my expertise to BrainLat, its members and its mission.

Screen Shot 2022-01-05 at 11.47.09 AM.png
Smita Krishnaswamy, PhD - Associate Professor of Genetics and Computer Science, Yale School of Medicine
When: May 27th, 2022, 9:45-11:00 am
Title: Deep Geometric and Topological Representations for Extracting Insights from Biomedical Data

Abstract: High-throughput, high-dimensional data has become ubiquitous in the biomedical sciences because of breakthroughs in measurement technologies. These large datasets, containing millions of observations of cells, molecules, brain voxels, and people, hold great potential for understanding the underlying state space of the data, as well as drivers of differentiation, disease, and progression. However, they pose new challenges in terms of noise, missing data, measurement artifacts, and the “curse of dimensionality.” In this talk, I will show how to leverage data geometry and topology, embedded within modern machine learning frameworks, to understand these types of complex scientific data. First, I will use data geometry to obtain representations that enable denoising, dimensionality reduction, and visualization. Next, I will show how to combine diffusion geometry with topology to extract multi-granular features from the data for predictive analysis. Then, I will move up from the local geometry of individual data points to the global geometry of data clouds and graphs, using graph signal processing to derive representations of these entities and optimal transport for distances between them. Finally, I will demonstrate how two neural networks use geometric inductive biases for generation and inference: GRASSY (geometric scattering synthesis network) for generating new molecules and molecular fold trajectories, and TrajectoryNet for performing dynamic optimal transport between time-course samples to understand the dynamics of cell populations. Throughout the talk, I will include examples of how these methods shed light on the inner workings of biomedical and cellular systems including cancer, immunology, and neuroscientific systems. I will finish by highlighting future directions of inquiry.

Bio: Smita Krishnaswamy is an Associate Professor in the Departments of Genetics and Computer Science at Yale and a core member of the Program in Applied Mathematics, Computational Biology and Interdisciplinary Neuroscience. She is also affiliated with the Yale Center for Biomedical Data Science, Yale Cancer Center, and Wu-Tsai Institute. Smita’s research focuses on developing deep representation learning methods that use mathematical concepts from manifold learning, data geometry, topology, and signal processing to denoise, impute, visualize, and extract structure, patterns, and relationships from big, high-throughput, high-dimensional biomedical data. Her methods have been applied to a variety of datasets from many systems in cancer biology, immunology, neuroscience, and structural biology.

Smita teaches three courses: Machine Learning for Biology (Fall), Deep Learning Theory and Applications (Spring), and Advanced Topics in Machine Learning & Data Mining (Spring). She completed her postdoctoral training at Columbia University in the systems biology department where she focused on learning computational models of cellular signaling from single-cell mass cytometry data. She was trained as a computer scientist with a Ph.D. from the University of Michigan’s EECS department where her research focused on algorithms for automated synthesis and probabilistic verification of nanoscale logic circuits. Following her time in Michigan, Smita spent 2 years at IBM’s TJ Watson Research Center as a researcher in the systems division where she worked on automated bug finding and error correction in logic.

qingyuzhao.png
Qingyu Zhao, PhD - Instructor, Department of Psychiatry and Behavioral Sciences, Stanford University
When: May 13th, 2022, 9:45-11:00 am
Title: Confounder-aware Deep Learning Models for Neuroimaging Applications

Abstract: The presence of confounding effects is one of the most critical challenges in applying deep learning techniques to medical applications. Confounders are extraneous variables that influence both input and output variables and thereby can easily distort the training and interpretation of deep learning models. How to remove confounding effects is a widely explored topic in traditional statistical research but is largely overlooked in the surge of deep learning applications, as researchers focus their attention on designing deeper and more powerful network architectures. In this talk, I will summarize our recent efforts in modeling confounding effects in deep learning models in the context of neuroimaging studies. I will first review a common practice in traditional (non-deep) machine learning studies that uses general linear models to regress out confounding effects from deterministic features. Then I will discuss how to translate this residualization concept to the deep learning setting where the features are learned dynamically in an end-to-end fashion. Lastly, I will highlight the strength of these new approaches in deriving confounder-free latent representations of MRI data, correcting feature distributions with respect to multiple confounding variables, and generating unbiased interpretations of the model.
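The classical residualization practice described in the abstract, regressing confounders out of precomputed features with a general linear model, can be written in a few lines. This is a sketch of that traditional pre-deep-learning step, not of the end-to-end deep variants discussed in the talk.

```python
import numpy as np

def residualize(X, C):
    """Remove linear confounding effects: regress each column of the feature
    matrix X on the confounders C (plus an intercept) and keep the residuals.

    X : (n_subjects, n_features) feature matrix
    C : (n_subjects, n_confounders) confounder matrix (e.g., age, sex, site)
    """
    C1 = np.column_stack([np.ones(len(C)), C])     # design matrix with intercept
    beta, *_ = np.linalg.lstsq(C1, X, rcond=None)  # GLM fit, one model per feature
    return X - C1 @ beta                           # residual (confound-free) features
```

By construction, the residual features are linearly uncorrelated with every confounder, which is exactly the property the deep, end-to-end versions try to enforce on learned representations.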

 

Bio: Dr. Zhao is an instructor in the Department of Psychiatry and Behavioral Sciences at Stanford University. He obtained his Ph.D. in computer science in 2017 from the University of North Carolina at Chapel Hill and was a postdoc and research scientist in the Stanford Psychiatry department. His research focuses on identifying biomedical phenotypes associated with neuropsychiatric disorders through statistical and machine-learning-based computational analysis of neuroimaging and neuropsychological data. Dr. Zhao is a recipient of the K99/R00 Pathway to Independence Award from the National Institute on Alcohol Abuse and Alcoholism.

Screen Shot 2022-01-05 at 11.51.49 AM.png
Ruby Kong, PhD - Postdoctoral Fellow in the Computational Brain Imaging Group at the National University of Singapore
When: April 22nd, 2022, 9:45-11:00 am
Title: Individual-specific parcellations for resting-state functional connectivity behavioral prediction
Watch a video of the talk here.

Abstract: There has been significant interest in using resting-state functional connectivity to predict behavior. Most studies have utilized functional connectivity from group-level parcellations for predicting behavior. Here, we propose models for estimating network-level and areal-level cortical parcellations. Using data from multiple datasets, we compare our individual-specific parcellations with group-level parcellations and other individual-specific parcellation techniques. We further explore comparisons with other forms of data representations, i.e., gradients and soft parcellations, for predicting cognitive, personality, and emotional measures in individuals. Several applications of individual-specific parcellations will also be discussed.

Bio: Ru(by) Kong is a postdoctoral fellow working with Thomas Yeo in the Computational Brain Imaging Group at the National University of Singapore. Her research focuses on developing machine learning algorithms to study individual differences in brain organization from fMRI and exploring their relationship with human behaviors. She received her Ph.D. from the Department of Electrical and Computer Engineering at the National University of Singapore.

Screen Shot 2022-01-05 at 11.50.38 AM.png
Narges Razavian, PhD - Assistant Professor, Department of Population Health and Department of Radiology, NYU Langone Health
When: April 15th, 2022, 9:45-11:00 am
Title: New Frontiers of Self-Supervised Learning in Medical Imaging

Abstract: Self-supervised learning (SSL) methods enable building powerful imaging features from unlabeled data. These methods can be particularly useful in the medical domain, where available datasets for many conditions are inherently small. In this talk, we will focus on two recent lines of work from my research group on core innovations in the methods and applications of SSL. Specifically, we will discuss SSL in learning histopathology imaging models for survival prediction in squamous cell lung cancer, in diabetic retinopathy identification, and in chest X-ray classification. We will end the talk with a discussion of the potential of SSL methods to leverage other data modalities (such as electronic health records and medical notes) for learning stronger imaging models that rely on few labeled data. This talk is based on our two recent papers at NeurIPS 2021 and MIDL 2022.
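Many SSL methods of the kind discussed here train an encoder with a contrastive objective over two augmented views of each image. Below is a minimal NumPy sketch of the widely used NT-Xent (normalized temperature-scaled cross-entropy) loss; this is one common choice of SSL objective, not necessarily the one used in the papers above.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """Contrastive loss over two batches of embeddings, where z1[i] and z2[i]
    are two augmented views of the same image (the positive pair)."""
    n = len(z1)
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit norm -> cosine similarity
    sim = z @ z.T / tau                               # temperature-scaled similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # numerically stable log-softmax over each row
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each positive
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimizing this loss pulls the two views of each image together while pushing apart all other images in the batch, which is what lets the encoder learn useful features without labels.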

Bio: Narges Razavian is an assistant professor at NYU Langone Health, Center for Healthcare Innovation and Delivery Sciences, and Predictive Analytics Unit. Her lab focuses on the design of novel AI/ML methods and their application to the medical domain, with a clinical translation outlook. She leads projects involving medical images, clinical notes, and electronic health records. Before NYU Langone, she was a postdoc at the CILVR lab in the NYU Courant CS department. She received her PhD in the Computational Biology group at CMU.

Screen Shot 2022-01-05 at 11.48.46 AM.png
Carlos Ponce, MD, PhD - Assistant Professor of Neurobiology, Harvard Medical School
When: April 8th, 2022, 9:45-11:00 am
Title: Cortical neurons as similarity machines: insights from machine learning

Abstract: Humans and other primates can understand images with very different statistical properties, such as photographs of natural scenes, art, and even computer-generated landscapes. However, it is not well-understood how visual cortex neurons allow this robust perceptual capacity. We will explore the view that this capacity arises from V1, V4 and inferotemporal cortex (IT) neurons acting as similarity machines, comparing incoming visual information to prototypes learned and/or refined through experience. This talk will illustrate how this perspective can aid in our explanations of primate behavior and how cortical neurons might function "in the wild." 

 

Bio: Carlos Ramon Ponce is an Assistant Professor in the Department of Neurobiology at Harvard Medical School. He studies vision in the primate brain, specifically focusing on how multiple cortical areas concurrently encode and transform visual information. He uses a combination of in vivo electrophysiology, behavioral tasks and machine learning models to carry out his research program. He served as an Assistant Professor in the Department of Neuroscience at Washington University School of Medicine, and is a recipient of the Packard Fellowship for Science and Engineering. He received his M.D.-Ph.D. from the Health Sciences and Technology program at Harvard Medical School and MIT.

Jim DiCarlo Photo.jpg
Jim DiCarlo, PhD - Peter De Florez Professor of Neuroscience, head of the Department of Brain and Cognitive Sciences, and McGovern Institute for Brain Research Investigator at the Massachusetts Institute of Technology
When: March 25th, 2022, 9:45-11:00 am
Title: Deep network models of the deep network mechanisms of (part of) human visual intelligence

Abstract: The human species is embarking on a great scientific quest — to understand the neural mechanisms of human intelligence.  Recent progress in multiple subfields of brain research suggests that key next steps in this quest will result from building systems-level network models that aim to abstract, emulate and explain the mechanisms underlying natural intelligent behavior.  In this talk, I will briefly review how neuroscience, cognitive science and computer science converged to create specific, deep neural network models intended to appropriately abstract, emulate and explain the mechanisms of primate visual object recognition.  Based on a large body of primate neurophysiological and behavioral data, some of these network models are currently the leading (i.e. most accurate) scientific theories of the internal mechanisms of the primate ventral visual stream and how those mechanisms support the foundation of visual intelligence: the ability of humans and other primates to rapidly and accurately infer latent world content (e.g. object identity, position, pose, etc.) from the set of pixels in most natural images.  While still incomplete, these leading scientific models have many uses in brain science and beyond.  In this talk, I will highlight one particular use: the design of patterns of light energy on the retina (i.e. new images) that neuroscientists can use to precisely modulate neuronal activity deep in the brain.   Our most recent experimental work suggests that, when targeted in this new way, the responses of individual high-level primate neurons are exquisitely sensitive to barely perceptible image modifications.   While surprising to many neuroscientists — ourselves included — this result is in line with the predictions of the current leading scientific models (above), it offers guidance to contemporary computer vision research, and it suggests a currently untapped non-pharmacological avenue to approach clinical interventions.

Bio: Dr. DiCarlo was named Investigator at the M.I.T. McGovern Institute for Brain Research and Assistant Professor in the Department of Brain and Cognitive Sciences in 2002, and was promoted to full Professor in 2012 and served as Department Head from 2012-2021. He was named MIT’s Peter de Florez endowed professor in 2015 and Director of MIT Quest for Intelligence in 2021. He received his M.D. and Ph.D. in Biomedical Engineering from Johns Hopkins University in 1998 and did his postdoctoral work at Baylor College of Medicine from 1998 to 2002. His research group is focused on understanding the neuronal representations and computational mechanisms that underlie visual object recognition in primates.

Screen Shot 2021-07-29 at 11.44_edited.jpg
Michel Thiebaut de Schotten, PhD - University of Bordeaux
When: March 11th, 2022, 9:45-11:00 am
Title: Brain Connectivity and Behaviour

Abstract: We commonly call gray matter the outer layer of the brain (or cerebral cortex) devoted to the most integrated cerebral functions, such as visuospatial, language, or memory skills. The cerebral cortex is composed of the cell bodies of neurons, giving it its eponymous colour. Just as good communication between individuals is essential for the functioning of a society, good communication between cortical regions is essential for brain function. In the brain, communication is enabled by tract-like extensions of neurons -- axons -- which group together in bundles to connect the different brain regions, some of which reach a length of over 20 cm. As true communication channels connecting the functions of several brain regions, these white matter bundles allow the creation of new, more complex functions, similarly to a group of letters that together make a full word with a specific meaning.
For a long time, research in human neuroscience has focused on the study of brain functions associated with cortical regions. Brain imaging techniques developed in the early 2000s, such as functional MRI, have made it possible to map brain functions (language, logic, memory, etc.) on the surface of the cortex. But we can clearly see the limits of this approach, which does not allow us to understand the relationships between different cortical regions involved in the same function. With the advent of new imaging techniques making it possible to model white matter tracts (for the past ten years or so), neuroscience is entering a new era where the anatomical support of brain functions is no longer considered only as a collection of regions on the surface of the brain, but as a network of interconnected nodes communicating with each other. Based on one of the largest collections of brains damaged by stroke (1,333 patients), combined with the most comprehensive meta-analysis database in neuroimaging (Neurosynth) and the best current white matter mapping derived from the Human Connectome 7T data, we have produced the first-ever functional white matter atlas, which alone maps more than 500 different functions in the brain. This is a major conceptual and epistemological advance in human neuroscience, since cerebral functions are no longer defined a priori and sought only in the cerebral cortex; they now emerge from the in-depth analysis of white matter networks conceived as functional territories defined by their connectivity. By placing itself at the interface of basic research and medical research, this atlas promises to be an essential tool for exploring new brain functions and their circuits, as well as for identifying typical stroke lesions that interrupt the white matter circuits underlying given functional activation networks.

Bio: Michel Thiebaut de Schotten is Director of Research at CNRS, Chair of the Organization for Human Brain Mapping (4,000 neuroimaging members), Editor-in-Chief of the peer-reviewed journal Brain Structure & Function, ERC Consolidator Grantee, and Head of the Department of Neurofunctional Imaging in Bordeaux (GIN) and the Brain Connectivity Behaviour laboratory in Paris (BCBlab).
His work, which includes more than 100 peer-reviewed articles, spans the whole gamut from novel neuroimaging methodologies to experimental work to theory. Critically, he dedicates significant effort toward the clinical translation of his work through an open model approach that makes his tools freely accessible to the community. His report published in Science (2005) showed the first demonstration in humans that hemispatial neglect could be reversibly produced by disconnecting the white matter. Today, operating rooms worldwide use his assessment to prevent spatial attention deficits after surgery. Subsequently, he mapped white matter anatomy in the healthy living human brain through a series of influential studies, which led to the publication of the Atlas of the Human Brain Connections. He developed the BCBtoolkit software suite, a set of programs for computing disconnections made freely available to the scientific and clinical communities. Recently, he has explored the role of white matter connections in the definition of functional areas. His most recent findings reaffirm a basic premise of neurobiology, i.e., that brain connectivity defines function and provides a reliable tool for segmenting the cortex into meaningful units to study development and disease in living subjects. Most recently, he published his first atlas of the function of white matter, as well as a new software tool, the Functionnectome, that unravels the contribution of white matter circuits to function.

BioPic2_small.jpeg
Pierre Elias, MD - Cardiology Fellow at Columbia Medicine and New York Presbyterian Hospital
When: March 4th, 2022, 9:45-11:00 am
Title: Machine Learning Applications in Cardiology

Abstract: We will discuss why and how deep learning approaches have the potential to greatly impact cardiac imaging. We will also explore use cases developed at Columbia that have led to two of the world’s first prospective clinical trials of deep learning in cardiology. Finally, we will critique the limitations of current ML approaches preventing mainstream adoption to answer: What are some of the big problems the field needs to tackle now so that machine learning makes it to the bedside?

Bio: Pierre Elias, MD is a cardiology fellow at Columbia University. His lab focuses on developing machine learning applications for early disease detection using cardiac imaging. He was previously a data scientist at Lumiata, where he helped develop Google's Knowledge Graph for Health. He was recently named a STAT News Wunderkind, which highlights 25 of the most promising junior researchers around the country.

Screen Shot 2022-01-27 at 9.18.41 AM.png
Leila Wehbe, PhD - Assistant Professor Machine Learning Department & Neuroscience Institute, Carnegie Mellon University
When: February 18th, 2022, 9:45-11:00 am
Title: Behavior measures are predicted by how information is encoded in an individual's brain

Abstract: Similar to how differences in the proficiency of the cardiovascular and musculoskeletal systems predict an individual's athletic ability, differences in how the same brain region encodes information across individuals may explain their behavior. However, when studying how the brain encodes information, researchers choose different neuroimaging tasks (e.g., language or motor tasks), which can rely on processing different types of information and can modulate different brain regions. We hypothesize that individual differences in how information is encoded in the brain are task-specific and predict different behavior measures. We propose a framework using encoding models to identify individual differences in brain encoding and test whether these differences can predict behavior. We evaluate our framework using task functional magnetic resonance imaging data. Our results indicate that individual differences revealed by encoding models are a powerful tool for predicting behavior, and that researchers should optimize their choice of task and encoding model for their behavior of interest. The paper is available at https://arxiv.org/pdf/2112.06048.pdf
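
The framework's central object admits a compact illustration: an encoding model is a regression (here, ridge) from stimulus features to brain responses, fit separately per individual, and the fitted weights act as a subject-specific signature. The sketch below is a hypothetical toy on simulated data, not the paper's pipeline; `fit_encoding_model` and the simulated subjects are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_encoding_model(stim_features, brain_resp, alpha=1.0):
    """Closed-form ridge regression from stimulus features to voxel responses."""
    X, Y = stim_features, brain_resp
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

# Toy data: three "subjects" who weight the same stimulus features differently.
X = rng.standard_normal((200, 5))                       # shared stimulus features
true_w = [rng.standard_normal((5, 10)) for _ in range(3)]
subjects = [X @ w + 0.1 * rng.standard_normal((200, 10)) for w in true_w]

# Each subject's fitted weight matrix is that subject's encoding "fingerprint";
# differences between fingerprints are the individual differences that the
# framework relates to behaviour.
fingerprints = [fit_encoding_model(X, Y).ravel() for Y in subjects]
sim = np.corrcoef(np.stack(fingerprints))               # subject-by-subject similarity
print(np.round(sim, 2))
```

In the paper this kind of similarity structure is estimated from task fMRI and related to behavioral measures; the toy only shows the mechanics.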

Screen Shot 2022-01-05 at 11.39.10 AM.png
David van Dijk, PhD - Assistant Professor of Medicine and Computer Science, Van Dijk lab, Yale
When: February 4th, 2022, 9:45-11:00 am
Title: Learning hidden signatures across space and time in biomedical data

Abstract: New measurement technologies, including large sequencing and imaging datasets, contain a wealth of information that promises unparalleled insight into biology. However, our ability to model and analyze this information is currently limited by the high dimensionality and large sample numbers inherent to these datasets. In this talk, I will present a number of recently developed algorithms that can discover hidden signatures in large biomedical datasets, including brain imaging and single-cell sequencing data. These algorithms are inspired by manifold learning, deep learning, and natural language processing, and provide new representations of data and spatiotemporal patterns that allow meaningful insight into the underlying biology.

Bio: Dr. David van Dijk completed his PhD at the University of Amsterdam and the Weizmann Institute of Science in Computer Science and Computational Biology, where he used machine learning to decipher links between DNA sequence and gene activity. Dr. van Dijk moved on to postdoctoral fellowships at Columbia University and Yale University, where he developed manifold learning and machine learning algorithms for single-cell RNA sequencing data. He has developed widely used computational tools for the biomedical community, including MAGIC, the first single-cell data imputation method, and PHATE, a dimensionality reduction and visualization method. He is currently an Assistant Professor in the Departments of Internal Medicine and Computer Science at Yale University, where his lab specializes in developing machine learning algorithms capable of analyzing large biomedical datasets, including single-cell RNA sequencing, health records, medical imaging, and neural activity data. Dr. van Dijk is a recipient of the Dutch Research Council Rubicon fellowship and the NIH R35 Maximizing Investigators' Research Award.

Screen Shot 2021-08-23 at 2.56.49 PM.png
Andreas Maier, PhD - Head of the Pattern Recognition Lab of the Friedrich-Alexander-Universität Erlangen-Nürnberg
When: December 17th, 9:45-11:00 am
Title: Known Operator Learning - An Approach to Unite Machine Learning, Physics, and Signal Processing

Abstract: We describe an approach for incorporating prior knowledge into machine learning algorithms. We aim at applications in physics and signal processing in which we know that certain operations must be embedded into the algorithm. Any operation that allows computation of a gradient or sub-gradient towards its inputs is suited for our framework. We derive a maximal error bound for deep nets that demonstrates that inclusion of prior knowledge results in its reduction. Furthermore, we show experimentally that known operators reduce the number of free parameters. We apply this approach to various tasks ranging from computed tomography image reconstruction over vessel segmentation to the derivation of previously unknown imaging algorithms. As such, the concept is widely applicable for many researchers in physics, imaging and signal processing. We assume that our analysis will support further investigation of known operators in other fields of physics, imaging and signal processing.
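
The core idea can be illustrated in a few lines: embed a fixed, known, differentiable operator in the model and let gradients flow through it to the remaining free parameters, which shrinks the set of parameters that must be learned. The sketch below is a hypothetical toy, not the talk's CT-reconstruction or segmentation networks: an arbitrary orthogonal matrix `K` stands in for a known operator such as a Fourier or projection operator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16

# Known operator: a fixed orthogonal transform K. It is never trained;
# only the unknown layer W receives gradient updates, which flow through K.
K, _ = np.linalg.qr(rng.standard_normal((n, n)))

W_true = rng.standard_normal((n, n))
X = rng.standard_normal((200, n))
Y = X @ W_true.T @ K.T                      # targets generated as K(W_true @ x)

W = np.zeros((n, n))                        # the only trainable parameters
lr = 0.1
for _ in range(500):
    resid = X @ W.T @ K.T - Y               # forward pass: x -> W x -> K(W x)
    W -= lr * (K.T @ resid.T @ X) / len(X)  # chain rule through the fixed K

print(f"max fit error: {np.abs(X @ W.T @ K.T - Y).max():.2e}")
```

Because `K` is known exactly, only `W` contributes free parameters, and gradient descent recovers `W_true`; this is the simplest linear form of the error-reduction intuition in the abstract.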

Biography: Prof. Dr. Andreas Maier was born on 26 November 1980 in Erlangen. He studied Computer Science, graduated in 2005, and received his PhD in 2009. From 2005 to 2009 he worked at the Pattern Recognition Lab at the Computer Science Department of the University of Erlangen-Nuremberg. His major research subject was medical signal processing in speech data. In this period, he developed the first online speech intelligibility assessment tool, PEAKS, which has been used to analyze over 4,000 patients and control subjects so far. From 2009 to 2010, he worked on flat-panel C-arm CT as a post-doctoral fellow at the Radiological Sciences Laboratory in the Department of Radiology at Stanford University. From 2011 to 2012 he joined Siemens Healthcare as an innovation project manager and was responsible for reconstruction topics in the Angiography and X-ray business unit.
In 2012, he returned to the University of Erlangen-Nuremberg as head of the Medical Reconstruction Group at the Pattern Recognition Lab. In 2015 he became professor and head of the Pattern Recognition Lab. Since 2016, he has been a member of the steering committee of the European Time Machine Consortium. In 2018, he was awarded an ERC Synergy Grant, “4D nanoscope”. His current research interests focus on medical imaging, image and audio processing, digital humanities, and interpretable machine learning, including the use of known operators.

Screen Shot 2021-08-23 at 7.23.20 AM.png
Archana Venkataraman - John C. Malone Assistant Professor, Johns Hopkins Whiting School of Engineering
When: November 19th, 9:45-11:00 am
Title: Deep Imaging-Genetics to Parse Neuropsychiatric Disorders

Abstract: Neuropsychiatric disorders, such as autism and schizophrenia, can be viewed from two complementary viewpoints. On one hand, they are linked to cognitive and behavioral deficits via altered neural functionality. On the other hand, these disorders exhibit high heritability, meaning that deficits may have a genetic underpinning. Identifying the biological pathways that link genetic variants to these heritable phenotypes remains an open challenge in the field. This talk will showcase two modeling frameworks that use deep learning to integrate neuroimaging, genetic, and phenotypic data, while maintaining interpretability of the extracted biomarkers. Our first framework (G-MIND) leverages a coupled autoencoder-classifier network to project the data modalities to a shared latent space that captures predictive differences between patients and controls. G-MIND uses a learnable dropout layer to extract interpretable biomarkers from the data, and our unique training strategy can easily accommodate missing data modalities across subjects. We demonstrate that G-MIND achieves better predictive performance than conventional imaging-genetics methods, and that the learned representation generalizes across sites. Our second framework (GUIDE) develops a biologically informed deep network for whole-genome analysis. Specifically, the network uses hierarchical graph convolution and pooling operations that mimic the organization of a well-established gene ontology to track the convergence of genetic risk across biological pathways. This ontology is coupled with an attention mechanism that automatically identifies the salient edges of the graph. We demonstrate that GUIDE can identify reproducible biomarkers that are closely associated with the deficits of schizophrenia.

Biography: Archana Venkataraman is a John C. Malone Assistant Professor in the Department of Electrical and Computer Engineering at Johns Hopkins University. She directs the Neural Systems Analysis Laboratory and is a core faculty member of the Malone Center for Engineering in Healthcare. Dr. Venkataraman’s research lies at the intersection of artificial intelligence, network modeling, and clinical neuroscience. Her work has yielded novel insights into debilitating neurological disorders, such as autism, schizophrenia, and epilepsy, with the long-term goal of improving patient care. Dr. Venkataraman completed her B.S., M.Eng., and Ph.D. in Electrical Engineering at MIT in 2006, 2007, and 2012, respectively. She is a recipient of the MIT Provost Presidential Fellowship, the Siebel Scholarship, the National Defense Science and Engineering Graduate Fellowship, the NIH Advanced Multimodal Neuroimaging Training Grant, the CHDI Grant on network models for Huntington's Disease, and the National Science Foundation CAREER award. Dr. Venkataraman was also named by MIT Technology Review as one of 35 Innovators Under 35 in 2019.

Screen Shot 2021-09-29 at 1.41.53 PM.png
Achuta Kadambi, PhD - Assistant Professor of ECE and CS, UCLA
When: October 15th, 9:45-11:00 am
Title: How do changes in materials affect how images look?

Abstract: Real-world scenes have diverse visual appearance. Such diversity stems from the fundamental physics of how light interacts with matter across different weather conditions, object types, and even people. These appearance variations mesmerize human beings but puzzle artificial vision systems, which cannot generalize to such diversity. To overcome this problem, my lab studies the physics of appearance and how we can design artificial vision systems that are invariant to physical effects like weather or skin type. Although we will discuss applications in robotics and autonomous systems, the focus of this talk will be on visual diversity as it applies to medical imaging, including telemedicine and infectious disease, with an emphasis on balancing both performance and fairness (Kadambi, Science 2021).

Biography: Achuta Kadambi received his PhD from MIT and joined UCLA where he is an Assistant Professor in Electrical Engineering and Computer Science. He teaches computer vision at UCLA (CS.188) and has co-authored a textbook in Computational Imaging, published by MIT Press in 2022. He received early career recognitions from NSF (CAREER), DARPA (Young Faculty Award), Army Research Office (YIP), Forbes (30 under 30), and is also co-founder of a computational imaging company, Akasha Imaging (http://akasha.im).

esf_headshot2.jpg
Emily S. Finn, PhD - Assistant Professor, Department of Psychological and Brain Sciences, Dartmouth College
When: October 8, 9:45-11:00 am
Title: Idiosynchrony: Using naturalistic stimuli to draw out individual differences in brain and behavior

Abstract: While neuroimaging studies typically collapse data across individuals, understanding how brain function varies across people is critical for both basic scientific progress and translational applications. My work has shown that whole-brain functional connectivity patterns serve as a “fingerprint” that can identify individuals and predict trait-level behaviors. Although we can detect these fingerprints while people are resting and performing various traditional cognitive tasks, manipulating brain state using naturalistic paradigms—e.g., movie watching, story listening—can enhance aspects of these patterns that are most relevant to behavior. I will also discuss extensions to the inter-subject correlation (ISC) framework that can model not only shared responses, but also individual variability in neural responses to naturalistic stimuli.
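
For readers new to the ISC framework, the basic quantity is easy to compute: correlate each subject's response time course with the mean time course of all other subjects. The sketch below runs this on simulated data (a hypothetical toy, not the lab's analysis pipeline).

```python
import numpy as np

rng = np.random.default_rng(2)

def isc(timeseries):
    """Leave-one-out inter-subject correlation.

    timeseries: (n_subjects, n_timepoints) responses of one region to the
    same naturalistic stimulus. Each subject is correlated with the mean
    of everyone else; high values indicate a strongly shared response.
    """
    ts = np.asarray(timeseries, dtype=float)
    out = []
    for s in range(len(ts)):
        others = np.delete(ts, s, axis=0).mean(axis=0)
        out.append(np.corrcoef(ts[s], others)[0, 1])
    return np.array(out)

# Toy data: a shared stimulus-driven signal plus idiosyncratic noise.
shared = np.sin(np.linspace(0, 8 * np.pi, 300))
subjects = shared + 0.5 * rng.standard_normal((10, 300))
print(np.round(isc(subjects), 2))
```

The extensions mentioned in the abstract go beyond this shared-response term to also model each subject's idiosyncratic deviation from it.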

Biography: Dr. Emily S. Finn is an assistant professor in the Department of Psychological and Brain Sciences at Dartmouth College, where she directs the Functional Imaging and Naturalistic Neuroscience (FINN) Lab. Her work focuses on individual variability in brain activity and behavior, especially as it relates to appraisal of ambiguous information under naturalistic conditions. She received a PhD in Neuroscience from Yale, and completed a postdoc at the National Institute of Mental Health.

pbellec.png
Pierre Bellec, PhD - Associate Professor, Department of Psychology, University of Montréal
When: April 16, 9:45-11:00 am
Title: The Courtois NeuroMod project: augmenting learning in artificial networks using human behaviour and brain functional activity

Abstract: The Courtois project on Neural Modelling (https://cneuromod.ca) aims at training artificial neural networks to imitate human behaviour and brain activity, using extensive neuroimaging data. CNeuroMod is collecting and publicly sharing 500 hours of neuroimaging data (fMRI, MEG) per subject, on 6 subjects. I will present quality assessment analyses of the first wave of CNeuroMod data acquisitions, which includes a series of functional localizers, movie watching, and video gameplay using "Shinobi III: Return of the Ninja Master".

Reference: Bellec, Boyle. Bridging the gap between perception and action: the case for neuroimaging, AI and video games. Preprint https://doi.org/10.31234/osf.io/3epws

Biography: Pierre Bellec, PhD, is the scientific director of the Courtois project on neuronal modelling, the principal investigator of the laboratory for brain simulation and exploration at the Montreal Geriatrics Institute (CRIUGM) and an associate professor at the psychology department of University of Montréal.

PN_photo.jpg
Pascal Notin - PhD Student, Oxford Applied and Theoretical Machine Learning Group, Department of Computer Science, University of Oxford
When: March 19, 9:45-11:00 am
Title: Uncertainty in deep generative models with applications to genomics and drug design

Abstract: In this talk I will discuss how combining uncertainty quantification and deep generative modeling helps address key questions in genomics and drug design.
The first part will cover an approach we developed to predict the clinical significance of protein variants in a fully unsupervised manner, learning directly from the natural distribution of proteins in evolutionary data. Our model EVE (Evolutionary model of Variant Effect) not only outperforms computational approaches that rely on labelled data, but also performs on par with high-throughput assays, which are increasingly used as strong evidence for variant classification. By combining uncertainty metrics and other sources of evidence, we predict the pathogenicity of 11 million variants across 1,081 disease genes and assign high-confidence reclassifications for 72k Variants of Unknown Significance.
The second part will focus on the task of optimizing a black-box objective function over high-dimensional structured spaces (e.g., maximizing the drug-likeness of molecules). Optimization in the latent space of deep generative models is a recent and promising approach to this problem. However, existing methods in this area lack robustness, as they may explore regions of the latent space for which no data were available during training. We propose a new approach that quantifies and leverages the epistemic uncertainty of the decoder to guide the optimization process, and we show that it yields more effective optimization, as it avoids cases in which the decoder generates unrealistic or invalid objects.
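
The second idea can be caricatured in one dimension: score each latent candidate by the predicted objective minus a penalty on decoder disagreement (a proxy for epistemic uncertainty), so that the search avoids latent regions where the decoder was never trained. Everything below is a hypothetical toy, not EVE or the paper's actual models or acquisition function.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_decoder():
    a = 1.0 + 0.05 * rng.standard_normal()
    # ensemble members agree near z ~ 0 and diverge for large |z|,
    # mimicking "off-manifold" latent regions with no training data
    return lambda z: a * z + 0.5 * (a - 1.0) * z ** 3

decoders = [make_decoder() for _ in range(8)]
objective = lambda x: -(x - 3.0) ** 2          # naively maximised near x = 3

zs = np.linspace(-6, 6, 241)                   # latent candidates
preds = np.stack([d(zs) for d in decoders])    # (n_decoders, n_candidates)
mean, var = preds.mean(axis=0), preds.var(axis=0)

naive = zs[np.argmax(objective(mean))]
penalised = zs[np.argmax(objective(mean) - 10.0 * var)]
print(f"naive optimum z = {naive:.2f}, uncertainty-aware z = {penalised:.2f}")
```

The uncertainty-aware choice trades a little predicted objective for staying in latent regions the decoder ensemble agrees on, which is the behaviour the abstract describes.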

Biography: Pascal Notin is a Ph.D. student in the Oxford Applied and Theoretical Machine Learning Group, part of the Computer Science Department at the University of Oxford, under the supervision of Yarin Gal.

His research interests lie at the intersection of Bayesian deep learning, generative models, and computational biology. The current focus of his work is to develop methods to quantify and leverage uncertainty in models for structured representations (e.g., sequences, graphs), with applications in biology and medicine. He has several years of applied machine learning experience developing AI solutions, primarily within the healthcare and pharmaceutical industries (e.g., disease prediction, clinical trials excellence, Real World Evidence analytics). Prior to coming to Oxford, he was a Senior Manager at McKinsey & Company in the New York and Paris offices, where he led cross-disciplinary teams on fast-paced analytics engagements. He obtained an M.S. in Operations Research from the IEOR department at Columbia University, and a B.S. and M.S. in Applied Mathematics from Ecole Polytechnique.

jzhou.png
Juan (Helen) Zhou, PhD - Associate Professor, Department of Medicine, National University of Singapore
When: February 26, 9:45-11:00 am
Title: Mapping multimodal brain network changes in neurological disorders: a longitudinal perspective

Abstract: The spatial patterning of each neurodegenerative disease relates closely to a distinct structural and functional network in the human brain. This talk will describe how brain network-sensitive neuroimaging methods such as resting-state fMRI and diffusion MRI can shed light on brain network dysfunctions associated with pathology and cognitive decline from the preclinical to the clinical stage of neurological disorders. I will first present our findings from two independent datasets on how amyloid and cerebrovascular pathology influence brain functional networks cross-sectionally and longitudinally in individuals with mild cognitive impairment and dementia. Evidence on longitudinal functional network organizational changes in healthy older adults, and on the influence of APOE genotype, will be presented. In the second part, I will describe our work on how different pathologies influence brain structural networks and white matter microstructure. I will also touch on some new data on how individual-level brain network integrity contributes to behavior and disease progression using multivariate and machine learning approaches. These findings underscore the importance of studying selective brain network vulnerability rather than individual regions, and of longitudinal designs. Developed further with machine learning approaches, multimodal network-specific imaging signatures will help reveal disease mechanisms and facilitate early detection, prognosis, and the search for treatments for neuropsychiatric disorders.

Biography:

Dr. Juan Helen Zhou is an Associate Professor at the Center for Sleep and Cognition, and the Deputy Director of the Center for Translational MR Research, Yong Loo Lin School of Medicine, National University of Singapore (NUS). She is also affiliated with Duke-NUS Medical School. Her laboratory studies selective brain network-based vulnerability in neuropsychiatric disorders using multimodal neuroimaging and machine learning approaches. She received her Bachelor's degree and Ph.D. from the School of Computer Science and Engineering, Nanyang Technological University, Singapore. Dr. Zhou was an associate research scientist at the Department of Child and Adolescent Psychiatry, New York University. She completed a post-doctoral fellowship at the Memory and Aging Center, University of California, San Francisco, and in the Computational Biology Program at the Singapore-MIT Alliance. Dr. Zhou is currently a Council Member and a previous Program Committee member of the Organization for Human Brain Mapping. She serves as an editor of multiple journals, including Human Brain Mapping, NeuroImage, and Communications Biology.

raghav.png
Raghavendra Selvan, PhD - Assistant Professor, University of Copenhagen
When: February 12, 9:45-11:00 am
Title: Quantum Tensor Networks for Medical Image Analysis

Abstract: Quantum Tensor Networks (QTNs) provide efficient approximations of operations involving high-dimensional tensors, and have been used extensively to model quantum many-body systems and to compress large neural networks. More recently, supervised learning has been attempted with tensor networks; these attempts have primarily focused on the classification of 1D signals and small images. In this talk, we will look at two formulations of QTN-based models for 2D and 3D medical image classification and 2D medical image segmentation. Both the classification and segmentation models use the matrix product state (MPS) tensor network under the hood, which efficiently learns linear decision rules in high-dimensional spaces. These QTN models are fully linear, end-to-end trainable using backpropagation, and have a lower GPU memory footprint than convolutional neural networks (CNNs). We show competitive performance compared to relevant CNN baselines on multiple datasets for classification and segmentation tasks, while presenting interesting connections to other existing supervised learning methods.

This preprint is the most relevant for this talk:

Locally orderless tensor networks for classifying two- and three-dimensional medical images. R Selvan et al. 2020: https://arxiv.org/abs/2009.12280
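
To make the shared MPS machinery concrete, the sketch below contracts a matrix product state over per-pixel feature maps into one score per class, i.e., a linear decision rule in the lifted feature space as described in the abstract. The cores here are random and untrained, a hypothetical toy; the actual models learn the cores by backpropagation and add ingredients such as locally orderless pooling.

```python
import numpy as np

rng = np.random.default_rng(4)

n_pixels, d, D, n_classes = 64, 2, 8, 3    # pixels, local dim, bond dim, classes

def local_feature_map(x):
    """Map pixel intensities in [0, 1] to d = 2 local feature vectors."""
    return np.stack([1.0 - x, x], axis=-1)           # (n_pixels, 2)

# One (D, d, D) tensor core per pixel, plus an output core with a class index.
cores = [rng.standard_normal((D, d, D)) * 0.35 for _ in range(n_pixels)]
out_core = rng.standard_normal((D, n_classes, D)) * 0.35

def mps_scores(x):
    phi = local_feature_map(x)
    half = n_pixels // 2
    left = np.ones(D)                                # left boundary vector
    for core, p in zip(cores[:half], phi[:half]):
        left = left @ np.tensordot(core, p, axes=([1], [0]))
    right = np.ones(D)                               # right boundary vector
    for core, p in zip(reversed(cores[half:]), phi[half:][::-1]):
        right = np.tensordot(core, p, axes=([1], [0])) @ right
    # the output core sits between the two halves -> one score per class
    return np.einsum("i,icj,j->c", left, out_core, right)

x = rng.uniform(size=n_pixels)                       # a flattened toy "image"
print(mps_scores(x))
```

The bond dimension `D` controls model capacity; the contraction is linear in the (exponentially large) tensor-product feature space yet costs only O(n D² d) operations per image.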

Biography: Raghavendra Selvan (Raghav) is currently an Assistant Professor at the University of Copenhagen, with joint responsibilities at the Machine Learning Section (Dept. of Computer Science), the Kiehn Lab (Department of Neuroscience), and the Data Science Laboratory. He received his PhD in Medical Image Analysis (University of Copenhagen, 2018), his MSc degree in Communication Engineering in 2015 (Chalmers University, Sweden), and his Bachelor's degree in Electronics and Communication Engineering in 2009 (BMS Institute of Technology, India). Raghavendra Selvan was born in Bangalore, India.

His current research interests broadly pertain to medical image analysis using quantum tensor networks, Bayesian machine learning, graph neural networks, approximate inference, and multi-object tracking theory.

Ghassemi_Marzyeh.png
Marzyeh Ghassemi, PhD - Assistant Professor, Computer Science, University of Toronto, Toronto, Canada
When: December 11, 9:45-11:00 am
Title: Don’t Expl-AI-n Yourself: Exploring "Healthy" Models in Machine Learning for Health

Abstract: Despite the importance of human health, we do not fundamentally understand what it means to be healthy. Health is unlike many recent machine learning success stories - e.g., games or driving - because there are no agreed-upon, well-defined objectives. In this talk, Dr. Marzyeh Ghassemi will discuss the role of machine learning in health, argue that the demand for model interpretability is dangerous, and explain why models used in health settings must also be "healthy". She will focus on a progression of work that encompasses prediction, time series analysis, and representation learning.

Biography: Dr. Marzyeh Ghassemi is an Assistant Professor at the University of Toronto in Computer Science and Medicine, and a Vector Institute faculty member holding a Canadian CIFAR AI Chair and a Canada Research Chair. She currently serves as a NeurIPS 2019 Workshop Co-Chair and General Chair for the ACM Conference on Health, Inference and Learning (CHIL). Previously, she was a Visiting Researcher with Alphabet's Verily and a post-doc with Dr. Peter Szolovits at MIT. Prior to her PhD in Computer Science at MIT, Dr. Ghassemi received an MSc degree in biomedical engineering from Oxford University as a Marshall Scholar, and B.S. degrees in computer science and electrical engineering as a Goldwater Scholar at New Mexico State University. Professor Ghassemi has a well-established academic track record across computer science and clinical venues, including NeurIPS, KDD, AAAI, MLHC, JAMIA, JMIR, JMLR, AMIA-CRI, EMBC, Nature Medicine, Nature Translational Psychiatry, and Critical Care. Her work has been featured in popular press outlets such as MIT News, NVIDIA, and the Huffington Post. She was also recently named one of MIT Technology Review's 35 Innovators Under 35.

ptiwari.jpg
Pallavi Tiwari, PhD - Case Western Reserve University, Cleveland, OH
When: December 4, 9:45-11:00 am
Title: Radiomics and Radio-genomics: Opportunities for Precision Medicine

Abstract: In this talk, Dr. Tiwari will focus on her lab’s recent efforts in developing radiomic (extracting computerized sub-visual features from radiologic imaging), radiogenomic (identifying radiologic features associated with molecular phenotypes), and radiopathomic (radiologic features associated with pathologic phenotypes) techniques to capture insights into the underlying tumor biology as observed on non-invasive routine imaging. She will focus on applications of this work for predicting disease outcome, recurrence, progression and response to therapy specifically in the context of brain tumors. She will also discuss current efforts in developing new radiomic features for post-treatment evaluation and predicting response to chemo-radiation treatment. Dr. Tiwari will conclude her talk with a discussion of some of the translational aspects of her work from a clinical perspective.

Biography: Dr. Pallavi Tiwari is an Assistant Professor of Biomedical Engineering and the director of the Brain Image Computing Laboratory at Case Western Reserve University. She is also a member of the Case Comprehensive Cancer Center. Her research interests lie in machine learning, data mining, and image analysis for personalized medicine solutions in oncology and neurological disorders. Her research has so far yielded over 50 peer-reviewed publications, 50 peer-reviewed abstracts, and 9 patents (3 issued, 6 pending). Dr. Tiwari has received several scientific awards, most notably being named one of 100 women achievers by the Government of India for making a positive impact in the field of science and innovation. In 2018, she was selected as one of Crain's Cleveland Business Forty Under 40. In 2020, she was awarded the J&J Women in STEM (WiSTEM2D) Scholar Award in Technology. Her research is funded through the National Cancer Institute, the Department of Defense, Johnson & Johnson, a V Foundation Translational Award, the Dana Foundation, the State of Ohio, and the Case Comprehensive Cancer Center.

Bassett_D_2019-127b.jpg
Danielle S. Bassett, PhD - J Peter Skirkanich Professor, Biomedical Engineering, University of Pennsylvania, Philadelphia, PA
When: November 20, 10-11:15 am
Title: Building mental models of our networked world

Abstract: Human learners acquire not only disconnected bits of information, but complex interconnected networks of relational knowledge. The capacity for such learning naturally depends upon three factors: (i) the architecture of the knowledge network itself, (ii) the nature of our perceptive instrument, and (iii) the instantiation of that instrument in biological tissue. In this talk, I will walk through each factor in turn. I will begin by describing recent work assessing network constraints on the learnability of relational knowledge. I will then describe a computational model informed by the free energy principle, which offers an explanation of how such network constraints manifest in human perception. In the third section of the talk, I will describe how neural representations reflect network constraints. Throughout, I'll move from previously published work to unpublished data, and from the world outside to the world inside, before speculating on as-yet uncharted territory.

Biography: Prof. Bassett is the J. Peter Skirkanich Professor at the University of Pennsylvania, with appointments in the Departments of Bioengineering, Electrical & Systems Engineering, Physics & Astronomy, Neurology, and Psychiatry. Bassett is also an external professor of the Santa Fe Institute. Bassett is best known for blending neural and systems engineering to identify fundamental mechanisms of cognition and disease in human brain networks. Bassett is currently writing a book for MIT Press entitled Curious Minds, with co-author Perry Zurn, Professor of Philosophy at American University. Bassett received a B.S. in physics from Penn State University and a Ph.D. in physics from the University of Cambridge, UK, as a Churchill Scholar and an NIH Health Sciences Scholar. Following a postdoctoral position at UC Santa Barbara, Bassett was a Junior Research Fellow at the Sage Center for the Study of the Mind. Bassett has received multiple prestigious awards, including the American Psychological Association's ‘Rising Star’ (2012), an Alfred P. Sloan Research Fellowship (2014), a MacArthur Fellowship (2014), the Early Academic Achievement Award from the IEEE Engineering in Medicine and Biology Society (2015), Harvard Higher Education Leader (2015), an Office of Naval Research Young Investigator award (2015), a National Science Foundation CAREER award (2016), Popular Science Brilliant 10 (2016), the Lagrange Prize in Complex Systems Science (2017), the Erdos-Renyi Prize in Network Science (2018), the OHBM Young Investigator Award (2020), and election to the AIMBE College of Fellows (2020). Bassett is the author of more than 300 peer-reviewed publications, which have garnered over 24,000 citations, as well as numerous book chapters and teaching materials. Bassett is the founding director of the Penn Network Visualization Program, a combined undergraduate art internship and K-12 outreach program bridging network science and the visual arts.
Bassett’s work has been supported by the National Science Foundation, the National Institutes of Health, the Army Research Office, the Army Research Laboratory, the Office of Naval Research, the Department of Defense, the Alfred P Sloan Foundation, the John D and Catherine T MacArthur Foundation, the Paul Allen Foundation, the ISI Foundation, and the Center for Curiosity.

RA_mlim.jpg
Rediet Abebe, PhD - Assistant Professor, Computer Science, University of California, Berkeley, Berkeley, CA
When: November 13, 9:45-11:00 am
Title: Data as Inequality: A Maternal Mortality Case Study

Abstract: While most mortality rates have decreased in the US, maternal mortality has increased and is among the highest of any OECD nation. Extensive public health research is ongoing to better understand the characteristics of communities with relatively high or low rates. In this talk, we explore the role that social media language can play in providing insights into such community characteristics. Analyzing pregnancy-related tweets generated in US counties, we reveal a diverse set of latent topics including Morning Sickness, Celebrity Pregnancies, and Abortion Rights. We find that the rates at which these topics are mentioned on Twitter predict maternal mortality rates with higher accuracy than standard socioeconomic and risk variables such as income, race, and access to health care, a result that holds even after reducing the analysis to six topics chosen for their interpretability and connections to known risk factors. We then investigate psychological dimensions of community language, finding that the use of less trustful, more stressed, and more negative affective language is significantly associated with higher mortality rates, and that trust and negative affect also explain a significant portion of the racial disparities in maternal mortality. We discuss the potential for these insights to inform actionable health interventions at the community level.

Biography: Rediet Abebe is a Junior Fellow at the Harvard Society of Fellows and an incoming Assistant Professor of Computer Science at the University of California, Berkeley. Abebe holds a Ph.D. in computer science from Cornell University as well as graduate degrees from Harvard University and the University of Cambridge. Her research is in the fields of artificial intelligence and algorithms, with a focus on equity and justice concerns. She co-founded and co-organizes Mechanism Design for Social Good (MD4SG), a multi-institutional, interdisciplinary research initiative working to improve access to opportunity for historically disadvantaged communities. Abebe's research has informed policy and practice at the National Institutes of Health (NIH) and the Ethiopian Ministry of Education. Abebe has been honored in MIT Technology Review's 35 Innovators Under 35, in ELLE, and on the Bloomberg 50 list as "one to watch." She has presented her research in venues including the National Academy of Sciences, the United Nations, and the Museum of Modern Art. Abebe co-founded Black in AI, a non-profit organization tackling representation and inclusion issues in AI. Her research is deeply influenced by her upbringing in her hometown of Addis Ababa, Ethiopia.

bcaffo.jpg
Brian Caffo, PhD - Professor, Department of Biostatistics, Johns Hopkins University, Baltimore, MD
When: October 16, 9:45-11:00 am
Title: Covariance regression for connectome outcomes

Abstract: In this talk, we cover methodology for jointly analyzing a collection of covariance or correlation matrices that depend on other variables. This covariance-as-an-outcome regression problem arises commonly in the study of brain imaging, where the covariance matrix in question is an estimate of functional or structural connectivity. Two main approaches to covariance regression exist: outer product models and joint diagonalization approaches. We investigate joint diagonalization approaches and discuss the benefits and costs of this solution. We distinguish between diagonalization approaches where the eigenvectors are selected in the absence of covariate information and those that choose the eigenvectors so that the resulting regression model fits best. The methods are applied to resting state functional magnetic resonance imaging data in a study of aphasia and potential interventions.
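The joint-diagonalization idea lends itself to a compact sketch. The following toy example (synthetic data and made-up parameter values, not the speaker's implementation) builds covariance matrices that share a common eigenbasis, recovers that basis from the mean covariance (the covariate-blind variant), and then regresses the log-eigenvalues on the covariate:

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared orthonormal eigenbasis Q for all covariance matrices.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

# Each "subject" covariance uses Q; its log-eigenvalues are linear in a covariate x.
xs = np.linspace(0.0, 1.0, 20)
intercepts = np.array([1.0, 0.5, -0.5, -1.0])
true_slopes = np.array([0.8, 0.0, 0.0, -0.8])
covs = [Q @ np.diag(np.exp(intercepts + x * true_slopes)) @ Q.T for x in xs]

# Covariate-blind joint diagonalization: eigenvectors of the mean covariance.
_, Q_hat = np.linalg.eigh(np.mean(covs, axis=0))

# Rotating each covariance into the common basis should leave it (near) diagonal.
off = 0.0
for C in covs:
    R = Q_hat.T @ C @ Q_hat
    off = max(off, np.abs(R - np.diag(np.diag(R))).max())

# Regress log-eigenvalues on the covariate: one linear model per component.
log_lams = np.log([np.diag(Q_hat.T @ C @ Q_hat) for C in covs])  # shape (20, 4)
slopes = np.polyfit(xs, log_lams, 1)[0]
print(off, np.round(slopes, 3))
```

Because `eigh` orders the recovered eigenvectors by the mean covariance's eigenvalues, the fitted slopes come out sorted accordingly; choosing the basis so that the downstream regression fits best, rather than from the covariate-blind average, is exactly the trade-off the abstract distinguishes.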

Biography: Brian Caffo is a Professor in the Department of Biostatistics at the Johns Hopkins Bloomberg School of Public Health. Dr. Caffo is a leading expert in statistics and biostatistics and is the recipient of the PECASE award, the highest honor given by the US Government to early-career scientists and engineers. Along with Roger Peng and Jeff Leek, Dr. Caffo created the Data Science Specialization on Coursera. He leads the Johns Hopkins Data Science Lab (DaSL), a group based in the Johns Hopkins Bloomberg School of Public Health whose mission is to enhance data science thinking everywhere and make data science accessible to the entire world. The DaSL believes all people should be able to develop literacy, fluency, and skill in data science so they can make sense of the data they encounter in their personal and professional lives. They recognize data science as a fundamentally human activity and focus their activities on helping people build data analyses for people.

Their goal is to

  • Teach people how to design, collect, interpret, and interact with data

  • Build a supportive environment for the people at Johns Hopkins who creatively use data to answer questions

  • Provide leadership on how people doing data science should be supported at Johns Hopkins and in academia, industry, and government

  • Build resources and products that help people learn and do data science

  • Conduct research into the theory and practice of data science

They have previously built massive open online courses in data science that have enrolled more than 8 million people around the world, published best-selling books and widely subscribed blogs, developed podcasts on data science, statistics, and academia, and developed a software platform for interactive learning of statistics in R. They make their impact by combining cutting-edge research in machine learning, artificial intelligence, and statistics with a deep understanding of applications and an eye toward the human behavioral component of data analysis.

Davatzikos.png
Christos Davatzikos, PhD - Wallace T. Miller Sr. Professor of Radiology, Director of AI in Biomedical Imaging Laboratory and Director of Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA
When: October 2, 9:45-11:00 am
Title: Disentangling neurobiological heterogeneity, via semi-supervised clustering of imaging signatures: applications to neuropsychiatric and neurodegenerative diseases

Abstract: Machine learning has shown great promise in neuroimaging over the past 15 years. This talk will focus on work taking place at Penn’s AIBIL laboratory, aiming to improve our understanding of the neurobiological heterogeneity of neuropsychiatric and neurodegenerative diseases via semi-supervised learning methods. A generative (CHIMERA) and a discriminative (HYDRA) approach are discussed, as well as their application to neuroimaging data revealing subtypes of schizophrenia and MCI/AD. Current work on extensions using multi-scale orthogonally-projective NMF, as a means for feature learning in conjunction with HYDRA, as well as a deep-learning approach utilizing GANs, is also presented. These methods are applied to data from large aging and schizophrenia consortia, collectively including over 40,000 MRI scans.

Biography: Christos Davatzikos is the Wallace T. Miller Sr. Professor of Radiology, with a secondary appointment in Electrical and Systems Engineering and joint appointments with the Bioengineering and Applied Math graduate groups at Penn. He received his undergraduate degree from the National Technical University of Athens, Greece, in 1989, and his Ph.D. from Johns Hopkins University in 1994. He joined the faculty at the Johns Hopkins School of Medicine as Assistant Professor (1995) and later Associate Professor (2001) of Radiology. In 2002 he moved to Penn to direct the Section for Biomedical Image Analysis, and in 2013 he established the Center for Biomedical Image Computing and Analytics. His interests are in the field of imaging informatics. In the past 15 years he has focused on the application of machine learning and pattern analysis methods to medical imaging problems, including the fields of computational neuroscience and computational neuro-oncology. He has worked on aging, Alzheimer's disease, schizophrenia, brain development, and brain cancer. Dr. Davatzikos is an IEEE and AIMBE Fellow, a Distinguished Investigator at the Academy of Radiology Research in the USA, and a member of various editorial boards.

jdb_edited.jpg
Jonathan Rosenblatt, PhD - Senior Lecturer (Asst. Prof.), Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Israel
When: August 7th, 9:45-11:00 am

Title: On the low power of predictive accuracy for signal detection

Abstract: The estimated accuracy of a supervised classifier is a random quantity with variability. A common practice in supervised machine learning is thus to test if the estimated accuracy is significantly better than chance level. This method of signal detection is particularly popular in neuroimaging and genetics. We provide evidence that using a classifier's accuracy as a test statistic can be an underpowered strategy for finding differences between populations, compared to a bona fide statistical test. It is also computationally more demanding. We compare test statistics that are based on classification accuracy to others based on multivariate test statistics. We find the probability of detecting differences between two distributions is lower for accuracy-based statistics. We examine several candidate causes for the low power of accuracy tests, including the discrete nature of the accuracy test statistic, the type of signal accuracy tests are designed to detect, their inefficient use of the data, and their regularization. When the purpose of the analysis is not signal detection but rather the evaluation of a particular classifier, we suggest several improvements to increase power; in particular, replacing V-fold cross-validation with the leave-one-out bootstrap.
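The power comparison at the heart of the talk can be demonstrated with a small simulation. The sketch below uses synthetic Gaussian data, a deliberately simple split-half nearest-mean classifier, and made-up sample sizes and effect sizes, so it illustrates the idea rather than reproducing the paper's experiments. It estimates the power of a permutation test built on either a classification-accuracy statistic or a multivariate mean-difference statistic:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, delta = 20, 5, 0.8          # per-group size, dimension, mean shift on dim 0

def acc_stat(X, y):
    # Split-half nearest-mean classifier accuracy (a simple stand-in classifier).
    tr = np.arange(len(y)) % 2 == 0
    m0, m1 = X[tr & (y == 0)].mean(0), X[tr & (y == 1)].mean(0)
    d0 = ((X[~tr] - m0) ** 2).sum(1)
    d1 = ((X[~tr] - m1) ** 2).sum(1)
    return np.mean((d1 < d0) == (y[~tr] == 1))

def mean_stat(X, y):
    # Multivariate statistic: squared distance between the two group means.
    return ((X[y == 1].mean(0) - X[y == 0].mean(0)) ** 2).sum()

def perm_pvalue(stat, X, y, n_perm=200):
    obs = stat(X, y)
    null = [stat(X, rng.permutation(y)) for _ in range(n_perm)]
    return (1 + sum(s >= obs for s in null)) / (n_perm + 1)

def power(stat, n_sim=30, alpha=0.05):
    hits = 0
    for _ in range(n_sim):
        X = rng.standard_normal((2 * n, d))
        y = np.repeat([0, 1], n)
        X[y == 1, 0] += delta      # true difference between the populations
        hits += perm_pvalue(stat, X, y) < alpha
    return hits / n_sim

p_acc, p_mean = power(acc_stat), power(mean_stat)
print(f"power (accuracy statistic): {p_acc:.2f}")
print(f"power (mean-difference statistic): {p_mean:.2f}")
```

In runs like this the mean-difference statistic typically rejects more often, echoing the talk's point that accuracy-based tests lose power, partly through the discreteness of the accuracy statistic.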

Biography: Jonathan D. Rosenblatt is a Senior Lecturer (Assistant Professor) in the Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Israel. He is a statistician working on a wide range of topics and applications, including distributed algorithms for machine learning, statistical methods for medical imaging, statistical theory, high-dimensional process control, and more.

Max Welling 08-small.JPG
Max Welling, PhD - Professor of Computer Science, Institute of Informatics, University of Amsterdam, Netherlands
When: July 24, 9:45-11:00 am
Title: Graph Nets: The Next Generation

Abstract: In this talk I will introduce the next generation of graph neural networks, known as Natural Graph Networks and Mesh CNNs. GNNs have the property that they are invariant to permutations of the nodes in the graph. This turns out to be unnecessarily limiting. In this work we develop new models that are more flexible, in the sense that they do not have isotropic kernels, but at the same time remain highly scalable. The Mesh-CNNs are developed to run messages over a graph (mesh) that represents a discretization of a (curved) surface. The Natural Graph Networks are designed to be more flexible convolutions on general graphs. This is joint work with Pim de Haan, Maurice Weiler, and Taco Cohen.
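The permutation symmetry the talk takes as its starting point is easy to verify numerically. Below is a minimal, generic message-passing layer in NumPy (an illustrative toy with an isotropic sum aggregator, not the Natural Graph Network or Mesh-CNN models themselves): relabeling the nodes permutes the output rows in exactly the same way.

```python
import numpy as np

rng = np.random.default_rng(3)

def gnn_layer(A, H, W_self, W_nbr):
    # One message-passing step: sum features over neighbors, then linearly mix.
    # The sum aggregator treats all edges identically (an isotropic kernel),
    # which makes the layer equivariant to node permutations.
    return np.tanh(H @ W_self + A @ H @ W_nbr)

n, f = 5, 3
A = rng.integers(0, 2, (n, n))
A = np.triu(A, 1); A = A + A.T                # undirected adjacency, no self-loops
H = rng.standard_normal((n, f))               # node features
W_self, W_nbr = rng.standard_normal((f, f)), rng.standard_normal((f, f))

out = gnn_layer(A, H, W_self, W_nbr)

# Check permutation equivariance: permuting nodes permutes the output the same way.
P = np.eye(n)[rng.permutation(n)]
out_perm = gnn_layer(P @ A @ P.T, P @ H, W_self, W_nbr)
print(np.allclose(out_perm, P @ out))  # True
```

The models in the talk relax exactly this isotropy restriction, allowing kernels that distinguish between neighbors while preserving scalability.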

Biography: Prof. Dr. Max Welling is a research chair in Machine Learning at the University of Amsterdam and a VP Technologies at Qualcomm. He has a secondary appointment as a fellow at the Canadian Institute for Advanced Research (CIFAR). Max Welling served as associate editor-in-chief of IEEE TPAMI from 2011 to 2015. He has served on the board of the NeurIPS Foundation since 2015 and was program chair and general chair of NeurIPS in 2013 and 2014, respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016 and general chair of MIDL 2018. He is a founding board member of ELLIS. Max Welling is the recipient of the ECCV Koenderink Prize in 2010. He directs the Amsterdam Machine Learning Lab (AMLAB), and co-directs the Qualcomm-UvA deep learning lab (QUVA) and the Bosch-UvA Deep Learning lab (DELTA). He has over 300 publications in machine learning and an h-index of 66.

alex_lavin_sq.jpg
Alexander Lavin - Chief Scientific Officer, Augustus Intelligence, New York, NY; Founder, Latent Sciences, Cambridge, MA
When: July 17th, 9:45-11:00 am
Title: Machine learning research to production in medicine

Abstract: Taking machine learning (ML) models and algorithms from R&D to production is often non-trivial, and exponentially so in medical applications: real-world patient data is typically ill-prepared for ML, deployment settings vary in often subtle ways that affect data distributions and thus model performance, model interpretability and explainability at several levels of abstraction are needed for usability and trust, principled uncertainty reasoning is critical for confidence in practice, and more. The tasks and datasets in ML research rarely reflect the real-world objectives and constraints. In this talk I discuss the misalignment issue with ML research and applications in medicine, and specifically prescribe ways to advance medical ML R&D to real-world deployment. I elucidate this with several examples: developing a state-of-the-art neurodegenerative prediction algorithm towards a personalized medicine application, and a novel unsupervised computer vision method for use in neuropathology.

Biography: Alexander Lavin is an AI researcher and software engineer specializing in Bayesian machine learning and probabilistic computation. Lavin is Chief Scientific Officer at stealth Augustus Intelligence, building state-of-the-art "augmented intelligence" for massive real-world challenges. Lavin is also the founder of Latent Sciences, a startup commercializing his patented AI platform for predictive disease modeling; the flagship application is presymptomatic prediction of neurodegeneration. Before Augustus and Latent, he was a Senior Research Engineer at Vicarious and at Numenta, building artificial general intelligence for robotics and developing biologically derived AI & ML algorithms, respectively. He was previously a spacecraft engineer, and is now an AI Advisor for NASA FDL. Lavin was a Forbes 30 Under 30 honoree in Science, advises several deep tech startups (from next-gen computation to medical devices), and has published in top journals and conferences across AI/ML and neuroscience. In his free time, Alexander enjoys running, yoga, live music, and reading sci-fi and theoretical physics books.

1529337693_OlafSporns_300x375_200x250.jp
Olaf Sporns, PhD - Professor and Chair, Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN
When: July 10th, 9:45 - 11:00 am
Title: Connectivity and Dynamics of Complex Brain Networks

Abstract: Networks (connectivity) and dynamics are two key pillars of network neuroscience – an emerging field dedicated to understanding structure and function of neural systems across scales, from neurons to circuits to the whole brain. In this presentation I will review current themes and future directions, including structure/function relationships, use of computational models to map information flow and communication dynamics, and a novel edge-centric approach to functional connectivity. I will argue that network neuroscience represents a promising theoretical framework for understanding the complex structure and functioning of nervous systems.

Biography: After receiving an undergraduate degree in biochemistry, Olaf Sporns earned a PhD in Neuroscience at Rockefeller University and then conducted postdoctoral work at The Neurosciences Institute in New York and San Diego. Currently he is the Robert H. Shaffer Chair, a Distinguished Professor, and a Provost Professor in the Department of Psychological and Brain Sciences at Indiana University in Bloomington. He is co-director of the Indiana University Network Science Institute and holds adjunct appointments in the School of Informatics and Computing and the School of Medicine. His main research area is theoretical and computational neuroscience, with a focus on complex brain networks. In addition to over 200 peer-reviewed publications, he is the author of two books, “Networks of the Brain” and “Discovering the Human Connectome”. He is the Founding Editor of “Network Neuroscience”, a journal published by MIT Press. Sporns was awarded a John Simon Guggenheim Memorial Fellowship in 2011, was elected Fellow of the American Association for the Advancement of Science in 2013, and received the Patrick Suppes Prize in Psychology/Neuroscience, awarded by the American Philosophical Society in 2017.

OmerInan.png
Omer T Inan, PhD - Associate Professor, Electrical and Computer Engineering, Georgia Tech, Atlanta, GA
When: June 12, 10:15-11:30 am
Title: Non-Invasive Physiological Sensing and Modulation for Human Health and Performance

Abstract: Recent advances in digital health technologies are enabling biomedical researchers to reframe health optimization and disease treatment in a patient-specific, personalized manner. Rather than a one-size-fits-all paradigm, the charge is for a particular profile to be fit to each patient, and for disease treatment (or wellness) strategies to then be tailored accordingly, perhaps even with fully closed-loop systems based on neuromodulation. Non-invasive physiological sensing and modulation can play an important role in this effort by augmenting existing research in -omics and medical imaging towards better developing such personalized models and phenotypic assays for patients, and in continuously adjusting such models to optimize therapies in real time to meet patients’ changing needs. While in many instances the focus of such efforts is on disease treatment, optimizing performance for healthy individuals is also a compelling need. This talk will focus on my group’s research on non-invasive sensing of the sounds and vibrations of the body, with applications to musculoskeletal and cardiovascular monitoring. In the first half of the talk, I will discuss our studies that are elucidating mechanisms behind the sounds of the knees, and particularly the characteristics of such sounds that change with acute injuries and arthritis. We use miniature microelectromechanical systems (MEMS) air-based and piezoelectric contact microphones to capture joint sounds emitted during movement, then apply data analytics techniques to both visualize and quantify differences between healthy and affected knees. In the second half of the talk, I will describe our work studying the vibrations of the body in response to the heartbeat using wearable MEMS accelerometers, and how this sensing fits within a non-invasive neuromodulation ecosystem for treating post-traumatic stress disorder.
Our group has extensively studied the timings of such vibrations in relation to the electrophysiology of the heart, and how such timings change for patients with cardiovascular diseases during treatment. Ultimately, we envision that these technologies can enable personalized titration of care and optimization of performance to reduce injuries and rehabilitation time for athletes and soldiers, improve the quality of life for patients with heart disease, and reduce overall healthcare costs.

Biography: Omer Inan is an Associate Professor of Electrical and Computer Engineering and Adjunct Associate Professor of Biomedical Engineering at Georgia Tech. He received his BS, MS, and PhD in Electrical Engineering from Stanford in 2004, 2005, and 2009, respectively. From 2009 to 2013, he was the Chief Engineer at Countryman Associates, Inc., a professional audio manufacturer of miniature microphones and high-end audio products for Broadway theaters, theme parks, and broadcast networks. He has received several major awards for his research, including the NSF CAREER award, the ONR Young Investigator award, and the IEEE Sensors Council Early Career award. While at Stanford as an undergraduate, he was the school record holder and a three-time NCAA All-American in the discus throw.

rajeshranganath.jpg
Rajesh Ranganath, PhD - Assistant Professor, Courant Institute of Mathematical Sciences and the Center for Data Science, New York University, New York, NY
When: May 29, 9:45-11 am
Title: Checking AI via Testing and its Application to COVID-19

Abstract: AI powers predictive models of medical data by uncovering subtle relationships between the input to the model and its output. However, the power of AI models means they can pick up on spurious relationships in data. Methods to surface these relationships can make AI models more robust. In the first part of this talk, I will show how powerful probabilistic models can be used to build hypothesis tests that identify important relationships in data. Along the way, I will discuss techniques for highlighting important pieces of the input for a particular observation. In the second part of the talk, I will discuss how these approaches have been used to support the development of an adverse event model for hospitalized COVID-19 patients and a visualization of this model for clinicians at the bedside.

Biography: Rajesh Ranganath is an assistant professor at NYU's Courant Institute of Mathematical Sciences and the Center for Data Science. He is also affiliate faculty at the Department of Population Health at NYUMC. His research focuses on approximate inference, causal inference, Bayesian nonparametrics, and machine learning for healthcare. Rajesh completed his PhD at Princeton University and his BS and MS at Stanford University. Rajesh has won several awards and fellowships, including the NDSEG graduate fellowship, the Porter Ogden Jacobus Fellowship, given to the top four doctoral students at Princeton University, and the Savage Award in Theory and Methods.

julia-schnabel.x91bd0f89.png
Julia Schnabel, PhD - Professor and Chair, Computational Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, UK
When: May 22, 9:45-11 am
Title: Deep learning for smart medical imaging

Abstract: Deep learning approaches in medical imaging have shown great promise in the areas of detection, segmentation and disease classification and are now moving into more complex topics such as motion correction and shape modelling. However, their success is limited by the availability and quality of the images in the dataset used for training these algorithms. A common approach is to train deep learning methods on a well annotated and curated database of high-quality image acquisitions, which then may fail on real patient cases in a hospital setting. In this talk I will show some of our recent deep learning approaches that aim to overcome some of these challenges, by applying novel methods for image augmentation and image compounding. To illustrate some of these approaches, I will draw from examples in cardiac magnetic resonance imaging and fetal ultrasound imaging.

Biography: Julia Schnabel is Professor of Computational Imaging at the School of Biomedical Engineering and Imaging Sciences, King’s College London. She joined King’s in 2015 from the University of Oxford, where she was Professor of Engineering Science. She previously held postdoc positions at University College London, King’s College London and University Medical Center Utrecht. Her research is focusing on machine/deep learning, nonlinear motion modelling, as well as multi-modality, dynamic and quantitative imaging for a range of medical imaging modalities and applications. She is the Director of the Centre for Doctoral Training in Smart Medical Imaging at King’s and Imperial College London, a Director of the Medical Imaging Summer School (MISS), has been Program Chair of MICCAI 2018, General Chair of WBIR 2016, and will be General Co-Chair of IPMI 2021. She is an Associate Editor of IEEE Transactions on Medical Imaging and Transactions on Biomedical Engineering, is on the Editorial Board of Medical Image Analysis, and is an Executive Editor of the new Journal of Machine Learning for Biomedical Imaging (melba-journal.org). She serves on the IEEE EMBS AdCom, the MICCAI Society Board, and has been elected Fellow of the MICCAI Society and Fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS).

barua.png
Souptik Barua, PhD - Postdoctoral Research Fellow, Scalable Health Lab, Rice University, Houston, TX
When: May 8th, 9:45 - 11:00 am
NYC location: Virtual only
Ithaca Location: Virtual only
Title: Leveraging structure in cancer imaging to predict clinical outcomes

Abstract: In this talk, I present data-driven frameworks that leverage different types of structure in cancer imaging data to predict clinical outcomes of interest. I demonstrate my findings using two kinds of cancer image data: multiplexed Immuno-Fluorescent (mIF) images from the field of pathology, and Computed Tomography (CT) from radiology. In mIF images, I show that spatial structure based on cell proximities can be used as a visual signature of immune infiltration. Further, the spatial proximity of certain cell types is independently associated with clinical outcomes such as overall survival and risk of progression in pancreatic and lung cancer. In CT images acquired at multiple time points, I demonstrate that the temporal evolution of image features can be used to predict clinical outcomes such as the likelihood of complete response to radiation therapy and the risk of developing long-term radiation injuries such as osteoradionecrosis. Towards the end of my talk, I will present some new research directions in leveraging structure from sensor data in diabetes and pediatric arrhythmias.

Biography: Souptik Barua is a postdoctoral research associate in the Electrical and Computer Engineering department at Rice University. As part of the Scalable Health labs at Rice, Souptik’s research draws on ideas from machine learning, computer vision, and statistics, to discover clinically meaningful information from sensor data. His current focus is on discovering computational biomarkers in diabetes, cancer, and cardiac arrhythmias.
Souptik obtained his Bachelors in Electrical Engineering (B.Tech) from the Indian Institute of Technology, Kharagpur, India in 2012. He received his M.S. and Ph.D. in Electrical Engineering from Rice University in 2015 and 2019, respectively. Souptik was one of 13 final-year Ph.D. students invited to the inaugural EPFL Ph.D. summit at Lausanne, Switzerland. He is also a current recipient of a $25k Innovation Seed grant from the NSF as part of the PATHS-UP program.

bagci18.jpg
Ulas Bagci, PhD - Principal Investigator and Assistant Professor, Center for Research in Computer Vision, University of Central Florida, Orlando, FL
When: March 5th, 3:15 - 4:30 pm
NYC location: Belfer (413 E69 St), BB 204-C
Ithaca Location: Weill Hall 226
Title: A Collaborative Computer Aided Diagnosis (C-CAD) System with Eye-Tracking, Sparse Attentional Model, and Deep Learning

Abstract: Vision researchers have been analyzing the behaviors of radiologists during screening to understand how and why they miss tumors or misdiagnose. In this regard, eye-trackers have been instrumental in understanding the visual search processes of radiologists. However, most relevant studies in this area are not compatible with realistic radiology reading rooms. In this talk, I will share our unique experience developing a paradigm-shifting computer aided diagnosis (CAD) system, called collaborative CAD (C-CAD), that unifies CAD and eye-tracking systems in realistic radiology room settings. In other words, we are creating artificial intelligence (AI) tools that benefit from human cognition and build on the complementary powers of AI and human intelligence. We first developed an eye-tracking interface providing radiologists with a real radiology reading room experience. Second, we proposed a novel computer algorithm that unifies eye-tracking data and a CAD system. The proposed C-CAD collaborates with radiologists via eye-tracking technology and helps them improve their diagnostic decisions. The proposed C-CAD system has been tested in a lung and prostate cancer screening experiment with multiple radiologists. More recently, we have also experimented with brain tumor segmentation using the proposed technology, with promising results. In the last part of my talk, I will describe how to develop AI algorithms that are trusted by clinicians, namely “explainable AI algorithms”. By embedding explainability into the black-box nature of deep learning algorithms, it will become possible to deploy AI tools into the clinical workflow, leading to more intelligent and less artificial algorithms in radiology rooms.

Biography: Dr. Bagci is a faculty member at the Center for Research in Computer Vision (CRCV). His research interests are artificial intelligence, machine learning, and their applications in biomedical and clinical imaging. Previously, he was a staff scientist and the lab co-manager at the NIH's Center for Infectious Disease Imaging (CIDI) Lab, Department of Radiology and Imaging Sciences (RAD&IS). Dr. Bagci had also been the leading scientist (image analyst) in a biosafety/bioterrorism project initiated jointly by NIAID and IRF. Dr. Bagci obtained his PhD in Computer Science from the University of Nottingham (UK) in collaboration with the University of Pennsylvania. Dr. Bagci is a senior member of IEEE and RSNA, and a member of scientific organizations such as SNMMI, ASA, RSS, AAAS, and MICCAI. Dr. Bagci is the recipient of many awards, including NIH's FARE award (twice), RSNA Merit Awards (5+ times), best paper awards, poster prizes, and several highlights in journal covers, media, and news. Dr. Bagci was co-chair of the Image Processing Track of the SPIE Medical Imaging Conference in 2017, and a technical committee member of MICCAI for several years.

ben_square-150x150.jpg
Ben Glocker, PhD - Reader in Machine Learning for Imaging, Faculty of Engineering, Department of Computing, Imperial College London, London UK
When: February 14th, 9:45 - 11:00 am
NYC location: Belfer Building (413 E69), BB 204-C
Ithaca Location: Phillips Hall, 233
Title: Causality Matters in Medical Imaging

Abstract: We use causal reasoning to shed new light on key challenges in medical imaging: 1) data scarcity, which is the limited availability of high-quality annotations, and 2) data mismatch, whereby a trained algorithm may fail to generalize in clinical practice. We argue that causal relationships between images, annotations, and data-collection processes can not only have profound effects on the performance of predictive models, but may even dictate which learning strategies should be considered in the first place. Semi-supervision, for example, may be unsuitable for image segmentation - one of the possibly surprising insights from our causal considerations in medical image analysis. We also discuss two approaches for tackling the problem of domain (or acquisition) shift. We conclude that it is important for the success of machine-learning-based image analysis that researchers are aware of and account for the causal relationships underlying their data.

Biography: Dr. Ben Glocker is Reader (eq. Associate Professor) in Machine Learning for Imaging at Imperial College London. He holds a PhD from TU Munich and was a post-doc at Microsoft and a Research Fellow at the University of Cambridge. His research is at the intersection of medical image analysis and artificial intelligence aiming to build computational tools for improving image-based detection and diagnosis of disease.

konrad_profile_3.jpg
Konrad Kording, PhD - Professor, Department of Bioengineering and Neuroscience, University of Pennsylvania, Philadelphia, PA
When: January 23rd, 3:15 - 4:30 pm
NYC location: ST8A-05 (Starr building, floor 8A)
Ithaca Location: Weill Hall 224
Title: Is most of medical machine learning wrong or misleading?

Abstract: The promise to convert large datasets into medical insights is driving the transition of medicine towards a data rich discipline. Consequently, many scientists focus on machine learning from such datasets. Countless papers are exciting, but very little has clinical impact. Here I argue that this is due to the way we do machine learning, and how common practices lead to non-replication or misleading interpretations of machine learning results. I will discuss ways of minimizing such problems.

Biography: Dr. Kording (he/him) is trying to understand how the world, and in particular the brain, works using data. Early research in the Kording lab focused on computational neuroscience, and in particular movement. As the approaches matured, the focus shifted to discovering ways in which new data sources, as well as emerging data analysis methods, can enable awesome possibilities. The current focus is on causality in data science applications: how do we know how things work if we cannot randomize? But the lab is also very much excited about understanding how the brain does credit assignment. The Kording lab's style of working is transdisciplinary, collaborating on virtually every project.

joaquin-goni.jpg
Joaquin Goni, PhD - Assistant Professor, School of Industrial Engineering & Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN
When: December 6th, 9:45 - 11:00 am
NYC location: Belfer Building, 204-A
Ithaca Location: Phillips Hall 233
Title: Brain connectomics: from maximizing subject identifiability to disentangling heritable and environmental traits

Abstract: In the 17th century, the physician Marcello Malpighi observed the existence of patterns of ridges and sweat glands on fingertips. This was a major breakthrough that originated a long and continuing quest for ways to uniquely identify individuals based on fingerprints. In the modern era, the concept of fingerprinting has expanded to other sources of data, such as voice recognition and retinal scans. Only in the last few years have technologies and methodologies achieved high-quality data for individual human brain imaging, and the subsequent estimation of structural and functional connectivity. In this context, the next challenge for human identifiability lies in brain data, particularly in brain networks, both structural and functional.

Here I present how the individual fingerprint of a human structural or functional connectome (as represented by a network) can be maximized through a reconstruction procedure based on a group-wise decomposition into a finite number of orthogonal brain connectivity modes. Using data from the Human Connectome Project and from a local cohort, I also introduce several extensions of this work, including an extended version of the framework for inter-scanner identifiability, an evaluation of identifiability on graph-theoretical measurements, and an ongoing extension of the framework toward disentangling heritable and environmental brain network traits.
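As a rough, hypothetical illustration of this style of analysis (not the authors' code; all data here are synthetic), connectomes from two sessions can be decomposed into orthogonal group-level modes via PCA and reconstructed from a subset of modes, keeping the number of modes that maximizes differential identifiability, i.e. the gap between within-subject and between-subject similarity across sessions:

```python
# Hypothetical sketch: maximize connectome identifiability by PCA-based
# group-wise decomposition and partial reconstruction. Synthetic data;
# illustrative of the general technique only.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_edges = 20, 300

# Each subject has a stable individual pattern plus session noise,
# giving simulated test and retest functional connectomes.
subject_pattern = rng.normal(size=(n_subjects, n_edges))
test = subject_pattern + 0.8 * rng.normal(size=(n_subjects, n_edges))
retest = subject_pattern + 0.8 * rng.normal(size=(n_subjects, n_edges))

def identifiability(test, retest):
    """Differential identifiability: mean self-correlation minus
    mean cross-subject correlation between the two sessions."""
    c = np.corrcoef(test, retest)[:len(test), len(test):]
    self_sim = np.mean(np.diag(c))
    others = c[~np.eye(len(test), dtype=bool)]
    return self_sim - np.mean(others)

def reconstruct(test, retest, n_modes):
    """Stack both sessions, decompose into orthogonal group modes (PCA
    via SVD), and rebuild each connectome from the leading modes only."""
    stacked = np.vstack([test, retest])
    mean = stacked.mean(axis=0)
    u, s, vt = np.linalg.svd(stacked - mean, full_matrices=False)
    recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes] + mean
    return recon[:len(test)], recon[len(test):]

baseline = identifiability(test, retest)
# Sweep the number of retained modes; the best reconstruction is at
# least as identifiable as the original data (all modes = no change).
scores = {m: identifiability(*reconstruct(test, retest, m))
          for m in range(2, 2 * n_subjects + 1)}
best_m = max(scores, key=scores.get)
print(f"baseline: {baseline:.3f}, best ({best_m} modes): {scores[best_m]:.3f}")
```

Dropping the trailing, noise-dominated modes is what raises identifiability in this toy setup; retaining all modes simply reproduces the original connectomes.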

Biography: I am a computational neuroscientist working in the emergent research area of brain connectomics. I head the CONNplexity Lab, which focuses on the application of complex-systems approaches to neuroscience and cognitive science, including frameworks such as graph theory, information theory, and fractal theory. Projects include relating structural and functional connectivity within the human brain. My interests include both healthy and disease conditions, including neurodegenerative diseases. I also contribute to the theoretical foundations of complex systems.

I earned my degree in Computer Engineering in 2003 (University of the Basque Country) and my Ph.D. in 2008 from the Department of Physics and Applied Mathematics (University of Navarra). After a first postdoc in a functional neuroimaging lab at the University of Navarra, I was a postdoctoral researcher in the group of Dr. Olaf Sporns at Indiana University from 2011 to 2014. In 2015, I joined Purdue University as an Assistant Professor.

Bratislav Misic, PhD - Assistant Professor, Neurology and Neurosurgery, McGill University, Montreal, Canada
When: November 15th, 9:45 - 11:00 am
NYC location: Belfer Building (413 E69 St), 302-D
Ithaca Location: Weill Hall, 224
Title: Signaling and transport in brain networks

Abstract: The complex network spanned by millions of axons and synaptic contacts acts as a conduit for both healthy brain function and dysfunction. Collective signaling and communication among populations of neurons support flexible behaviour and cognitive operations. Perturbations, such as stimulation-induced dynamic activity or the accumulation of pathogenic proteins, often spread from their source location via axonal projections. Here I will focus on how two fundamental types of dynamics - electrical signaling and molecular transport - can be modeled in brain networks.
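A common way to formalize spread over a connectome is network diffusion, where a perturbation seeded in one region flows along weighted connections. The sketch below is a minimal, hypothetical illustration of that general idea on a synthetic network (not the speaker's model):

```python
# Hypothetical sketch: a perturbation (e.g. pathogenic protein load)
# seeded in one region diffuses along weighted connections, following
# dx/dt = -L x with L the graph Laplacian. Synthetic network only.
import numpy as np

rng = np.random.default_rng(1)
n = 8  # number of brain regions

# Symmetric weighted adjacency matrix standing in for a connectome.
w = rng.random((n, n))
adj = np.triu(w, 1)
adj = adj + adj.T

laplacian = np.diag(adj.sum(axis=1)) - adj

# Seed the perturbation in region 0.
x = np.zeros(n)
x[0] = 1.0

dt = 0.01
for _ in range(2000):  # Euler integration of dx/dt = -L x
    x = x - dt * laplacian @ x

# Total load is conserved; on a connected graph the pattern
# equilibrates toward a uniform distribution across regions.
print(x.round(3))
```

The Laplacian's zero row sums are what conserve the total load; intermediate time points, not shown here, carry the region-specific spreading pattern that such models are typically fit against.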

Biography: Dr. Bratislav Misic leads the Network Neuroscience Lab. We investigate how cognitive operations and complex behaviour emerge from the connections and interactions among brain areas. The goal of this research is to quantify the effects of disease on brain structure and function. Our research program emphasizes representations and models that not only embody the topological organization of the brain, but also capture the complex multi-scale relationships that link brain network topology to dynamic biological processes, such as neural signalling and disease spread. Our research lies at the intersection of network science, dynamical systems and multivariate statistics, with a focus on complex data sets involving multiple neuroimaging modalities, including fMRI, DWI, MEG/EEG and PET.
