Case Western Reserve University
Kutay Tasci

PhD Student

Research Interests

High-performance computing, parallel and distributed systems, machine learning systems, graph neural networks, and large-scale AI.

Education

  • Ph.D. in Computer and Data Sciences, Case Western Reserve University, 2024–2028 (expected)
  • M.S. in Computer Engineering, Bilkent University, 2021–2024
  • B.S. in Computer Engineering, Hacettepe University, 2017–2021

Awards and Honors

  • Swanger Graduate Fellowship, 2024
  • 5G and Beyond Joint Graduate Support Program (ICTA), 2022
  • Bilkent University Comprehensive Scholarship, 2021
  • METU IEEE Hackathon – 1st Place, 2020

Current Projects

Efficient Geometric Graph Neural Networks

My thesis work focuses on improving the efficiency and scalability of geometric graph neural networks (GNNs) for scientific machine learning applications. This research studies the computational and systems challenges of geometric message passing, with emphasis on designing methods that reduce redundant computation, improve memory efficiency, and better utilize modern GPU architectures. The broader goal is to enable faster and more scalable training and inference for geometric deep learning models used in domains such as molecular modeling, materials science, and physical simulation.

Sparse Attention and Long-Context Transformer Optimization

This research focuses on improving the efficiency of block-sparse attention implementations for long-context transformer models. In particular, I study how token and block reordering techniques can transform irregular sparse attention patterns into hardware-friendly layouts that reduce the number of active blocks and improve GPU execution efficiency. The broader goal is to make sparse transformer inference more practical and scalable for large language models and other long-sequence learning workloads.
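The core idea of block reordering can be illustrated with a small NumPy sketch. This is a toy illustration, not an actual GPU kernel: the function names (`active_blocks`) and the interleaved example mask are made up here to show how permuting rows and columns with identical sparsity patterns can shrink the number of block tiles a sparse attention kernel must execute.

```python
import numpy as np

def active_blocks(mask, B):
    """Count B x B tiles that contain at least one nonzero entry."""
    N = mask.shape[0]
    tiles = mask.reshape(N // B, B, N // B, B)
    return int(tiles.any(axis=(1, 3)).sum())

# Interleaved toy pattern: even tokens attend to even tokens, odd to odd.
N, B = 8, 4
idx = np.arange(N)
mask = (idx[:, None] % 2) == (idx[None, :] % 2)

# Group rows/columns with identical sparsity patterns together
# (here: sort by parity), applied symmetrically to both axes.
order = np.argsort(idx % 2, kind="stable")
reordered = mask[order][:, order]

print(active_blocks(mask, B))       # 4 -- every tile is touched
print(active_blocks(reordered, B))  # 2 -- only the diagonal tiles remain
```

The same nonzeros are computed either way; reordering only changes which tiles they land in, which is exactly what makes irregular patterns hardware-friendly.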

Publications

Alternative Decomposed Message Passing for Efficient Geometric GNNs
IEEE IPDPS 2026 Workshops (GrAPL), 2026

This paper proposes an alternative decomposed message-passing framework for improving the efficiency of geometric graph neural networks. Instead of relying on concatenation-based message generation, the method decomposes node-, edge-, and angle-level transformations into reusable components, reducing redundant computation and memory overhead while preserving algebraic equivalence. The framework is designed for efficient GPU execution and serves as a drop-in replacement for representative architectures such as EGNN and CHGNet. Experimental results show up to 2x training speedup and 60% end-to-end memory reduction with no loss in accuracy across diverse geometric learning workloads.
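The algebraic identity behind the decomposition can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's full method: real geometric GNN messages also involve distances, angles, and nonlinearities, whereas the sketch below only shows how a concatenation-based linear message layer splits into per-node, per-node, and per-edge transforms that are computed once and reused across edges.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_edges, d, d_out = 5, 12, 4, 3
h = rng.normal(size=(n_nodes, d))          # node features
e = rng.normal(size=(n_edges, d))          # edge features
src = rng.integers(0, n_nodes, n_edges)    # edge endpoints
dst = rng.integers(0, n_nodes, n_edges)

W = rng.normal(size=(d_out, 3 * d))        # one weight over [h_i ; h_j ; e_ij]

# Concatenation-based messages: one matmul per edge over a 3d-wide input.
m_concat = np.concatenate([h[src], h[dst], e], axis=1) @ W.T

# Decomposed form: split W columnwise, transform each NODE once,
# then gather per edge -- algebraically identical to the above.
W_src, W_dst, W_edge = np.split(W, 3, axis=1)
h_src_t = h @ W_src.T                      # n_nodes transforms, reused per edge
h_dst_t = h @ W_dst.T
m_decomp = h_src_t[src] + h_dst_t[dst] + e @ W_edge.T

assert np.allclose(m_concat, m_decomp)
```

The saving comes from the gather: node transforms cost O(n_nodes) matmuls instead of O(n_edges), which matters whenever the graph has many more edges than nodes.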

Transforming temporal-dynamic graphs into time-series data for solving event detection problems
Turkish Journal of Electrical Engineering and Computer Sciences, 2023

This paper proposes a workflow for detecting important events in temporal-dynamic graphs by transforming graph snapshots into multivariate time-series data. The method first generates graph-level embeddings for each time step using temporal graph representation learning, and then applies unsupervised time-series anomaly detection models to identify abnormal events. The approach was evaluated on multiple real-world social media datasets and showed competitive or improved performance compared to prior event detection methods. The work demonstrates that graph embeddings can serve as an effective bridge between dynamic graph analysis and time-series anomaly detection.
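The snapshot-to-time-series workflow can be sketched end to end with stand-ins for each stage. This is only an illustration of the pipeline shape: the hand-crafted structural statistics below stand in for the learned temporal graph embeddings, and a simple z-score detector stands in for the unsupervised time-series anomaly detection models used in the paper; all function names here are made up.

```python
import numpy as np

def snapshot_embedding(adj):
    """Toy graph-level embedding for one snapshot: simple structural
    statistics stand in for learned graph representations."""
    deg = adj.sum(axis=1)
    return np.array([adj.sum(), deg.mean(), deg.max(), (deg == 0).mean()])

def zscore_anomalies(series, threshold=3.0):
    """Flag time steps whose embedding deviates strongly from the mean
    in any feature dimension."""
    mu, sigma = series.mean(axis=0), series.std(axis=0) + 1e-9
    scores = np.abs((series - mu) / sigma).max(axis=1)
    return np.where(scores > threshold)[0]

# Stage 1: a stream of sparse random snapshots with one injected event.
rng = np.random.default_rng(42)
T, n = 30, 10
snapshots = [(rng.random((n, n)) < 0.1).astype(float) for _ in range(T)]
snapshots[17] = np.ones((n, n))            # sudden dense burst = the "event"

# Stage 2: embed each snapshot, forming a multivariate time series.
series = np.stack([snapshot_embedding(a) for a in snapshots])

# Stage 3: run time-series anomaly detection over the embeddings.
events = zscore_anomalies(series)
print(events)                              # the burst at t=17 stands out
```

The point of the bridge is that once snapshots become a multivariate series, any off-the-shelf time-series detector can replace the z-score step without touching the graph side of the pipeline.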
