Privacy-Preserving Split Learning
Aqsa Shabbir
Master's Student
(Supervisor: Asst. Prof. Sinem Sav), Computer Engineering Department
Bilkent University
Abstract: Split learning enables collaborative model training without sharing raw data; however, its traditional form remains vulnerable because plaintext activations and gradients can leak sensitive information. These leakages enable attacks such as input reconstruction, label and property inference, and model manipulation, undermining the privacy guarantees that split learning aims to provide. This thesis addresses these limitations by designing a privacy-preserving split learning system. The proposed design inverts the conventional workflow so that labels, loss computation, and backpropagation remain entirely on the client, while all server-side computation is performed in the encrypted domain using homomorphic encryption. As a result, the server never observes plaintext activations, labels, or gradients during training, eliminating known attack surfaces. To support practical deployment, the thesis develops an estimator that models encrypted computation cost and enables efficient, budget-aware split selection without exhaustive empirical tuning. Our contributions include: (i) identifying and analyzing the components of traditional split learning that lead to privacy leakage, (ii) designing an inverted split learning system that eliminates information leakage by executing all server-side computation over encrypted data, and (iii) developing an estimator that enables the efficient use of homomorphic encryption in split learning under cryptographic and computational constraints.
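For readers unfamiliar with the setting, the snippet below is a minimal sketch of the core idea: the client encrypts its cut-layer activations so the server's forward computation runs only over ciphertexts. It assumes the open-source TenSEAL library's CKKS API; the layer shapes, encryption parameters, and variable names are illustrative placeholders and do not reflect the actual system, which additionally keeps labels, loss, and backpropagation on the client and uses the proposed estimator to choose the split point.

```python
import numpy as np
import tenseal as ts

# CKKS context (illustrative, untuned parameters)
ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()  # needed for encrypted matrix-vector products

# --- Client side ---
x = np.random.randn(16)                 # raw input: never leaves the client
W_client = np.random.randn(16, 8)       # client-side layers up to the cut
act = np.tanh(x @ W_client)             # plaintext activation at the cut layer
enc_act = ts.ckks_vector(ctx, act.tolist())   # encrypt before sending to the server

# --- Server side: operates only on ciphertexts ---
W_server = np.random.randn(8, 4).tolist()     # a server-side linear layer (plaintext weights)
enc_out = enc_act.mm(W_server)                # encrypted vector x plaintext matrix product

# --- Back on the client ---
out = np.array(enc_out.decrypt())       # only the secret-key holder can decrypt
# labels, loss computation, and backpropagation would proceed here, entirely on the client
```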
Date: Monday, January 12 @ 10:00 | Place: EA 409