Invited Speakers
Jeff Cohn, University of Pittsburgh

Dr. Jeffrey Cohn is professor emeritus of Psychology and Intelligent Systems at the University of Pittsburgh, courtesy faculty at the Robotics Institute of Carnegie Mellon University, and chief scientist and co-founder at Deliberate AI. He has led interdisciplinary efforts to develop advanced methods for the automatic analysis and synthesis of facial expression, body movement, and prosody, and has applied those tools to research in human emotion, nonverbal communication, and computational psychiatry. His research has been supported by the U.S. National Institutes of Health, the National Science Foundation, and other agencies in the U.S., Canada, and Australia.

Talk: Objective Measurement and Analysis of Internalizing Disorders Using Multimodal Machine Learning for Clinical Science and Treatment

To reveal mechanisms in psychopathology and gauge treatment response, reliable, valid, and efficient measurement is critical. Self-report and clinical interview, the current state of the art in diagnosis and clinical-trial endpoint measurement, assess severity but are subjective, difficult to standardize within and across settings, impose patient burden, and lack granularity. Multimodal machine learning presents an increasingly powerful alternative to these standard approaches. With emphasis on audio-visual communication in depression and OCD, I present my interdisciplinary teams' efforts in developing and applying objective, reliable, valid, efficient, and interpretable multimodal measures of disorder, neural activity, and response to treatment in children and adults.
Metehan Çiçek, Ankara University

Dr. Metehan Çiçek is Professor of Physiology and Neuroscience at Ankara University and director of the Cognitive Neuroimaging Lab at the Neuroscience and Neurotechnology Center of Excellence (NÖROM) (https://norom.gazi.edu.tr/). His research focuses on number, time, and spatial perception, and on the effects of reward and emotion on these functions. His research methods include fMRI and DTI as well as behavioral measures. He has received research funding from TÜBİTAK and TÜSEB, and recently received support from the Strategy and Budget Directorate (together with other colleagues) to establish NÖROM, a neuroscience center incorporating state-of-the-art neuroimaging infrastructure.

Talk: Social Stress and Time Perception Interaction Assessed with Neuroimaging and Epigenetic Approaches

Everybody knows that emotions affect our perception of time. At the same time, inter-individual differences in the interaction between emotion and timing are quite remarkable. We used an ecologically valid model, social stress, to assess the neural underpinnings of this interaction. I will present our team's neuroimaging work showing how stress changes our perception of time and which neural pathways are involved. The talk will also cover how inter-individual epigenetic variation affects time perception.
Tilbe Göksun, Koç University

Tilbe Göksun is a Professor of Psychology at Koç University and the director of the Language & Cognition Lab (https://lclab.ku.edu.tr/). Her primary research involves language-thought interactions across developmental time, early language learning, and multimodal language processing and production in different populations. Her work employs inter- and multidisciplinary perspectives, focusing on multi-method and cross-linguistic research with multilevel analyses. She has received numerous national and international awards (e.g., the James S. McDonnell Human Cognition Scholar Award, the TÜBİTAK Encouragement Award, the Turkish Academy of Sciences Outstanding Young Scientist Award, and the Science Academy, Turkey, Young Scientist Award). She serves on the Cognitive Science Society Governing Board.

Talk: Multifunctionality of gesture use and processing: An individual differences approach

Language is multimodal. One aspect of multimodality is the hand gestures people produce when they communicate and represent their thoughts. These gestures also reflect, and can even change, individuals' thinking processes. However, individuals vary in how much information they receive from gestures and in how they use their own gestures. Why do people gesture in such a wide variety of situations? Do gestures serve similar functions in all these instances? How does individuals' use of other cognitive resources interact with their gesture use and processing across different contexts? This talk will address the multifunctionality of gesture processing and production across populations and contexts, with an emphasis on gesture's role in compensating for verbal and visuo-spatial cognitive resources.