CyLab Faculty, Students to Present at NDSS Symposium 2024
By Michael Cunningham
Carnegie Mellon faculty and students will present on a wide range of topics at the 31st Annual Network and Distributed System Security (NDSS) Symposium. Held at the Catamaran Resort Hotel & Spa in San Diego from February 26 through March 1, the event fosters information exchange among researchers and practitioners of network and distributed system security.
Bringing together hundreds of security educators, researchers and practitioners from all over the world, the NDSS Symposium encourages and enables the Internet community to apply, deploy, and advance the state of available security technologies.
Here, we've compiled a list of the papers co-authored by electrical and computer engineering and CyLab Security and Privacy Institute members that are being presented at the event.
Attributions for ML-based ICS Anomaly Detection: From Theory to Practice
Clement Fung, Carnegie Mellon University; Eric Zeng, Carnegie Mellon University; Lujo Bauer, Carnegie Mellon University
Abstract: Industrial Control Systems (ICS) govern critical infrastructure like power plants and water treatment plants. An ICS can be attacked through manipulations of its sensor or actuator values, causing physical harm. A promising technique for detecting such attacks is machine-learning-based anomaly detection, but it does not identify which sensor or actuator was manipulated, making it difficult for ICS operators to diagnose the anomaly's root cause. Prior work has proposed using attribution methods to identify what features caused an ICS anomaly-detection model to raise an alarm, but it is unclear how well these attribution methods work in practice. In this paper, we compare state-of-the-art attribution methods for the ICS domain with real attacks from multiple datasets. We find that attribution methods for ICS anomaly detection do not perform as well as suggested in prior work and identify two main reasons. First, anomaly detectors often detect attacks either immediately or significantly after the attack start; we find that attributions computed at these detection points are inaccurate. Second, attribution accuracy varies greatly across attack properties, and attribution methods struggle with attacks on categorical-valued actuators. Despite these challenges, we find that ensembles of attributions can compensate for weaknesses in individual attribution methods. Towards practical use of attributions for ICS anomaly detection, we provide recommendations for researchers and practitioners, such as the need to evaluate attributions with diverse datasets and the potential for attributions in non-real-time workflows.
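The ensembling idea the abstract describes can be illustrated with a minimal sketch. This is not the authors' code; the attribution scores, feature names, and averaging scheme below are hypothetical, chosen only to show how combining several attribution methods can point to the manipulated feature.

```python
# Illustrative sketch (not the paper's implementation): ensemble several
# attribution methods by averaging their normalized per-feature scores.

def normalize(scores):
    """Scale absolute attribution scores so they sum to 1."""
    total = sum(abs(s) for s in scores.values())
    return {f: abs(s) / total for f, s in scores.items()} if total else scores

def ensemble_attributions(method_outputs):
    """method_outputs: list of {feature: score} dicts, one per method."""
    combined = {}
    for scores in method_outputs:
        for feat, s in normalize(scores).items():
            combined[feat] = combined.get(feat, 0.0) + s
    n = len(method_outputs)
    # Average over methods; the highest score flags the likely root cause.
    return {f: s / n for f, s in combined.items()}

# Hypothetical attributions for one water-treatment anomaly, from two methods:
saliency_scores = {"pump_1": 0.2, "valve_3": 0.7, "sensor_5": 0.1}
shap_like_scores = {"pump_1": 0.1, "valve_3": 0.6, "sensor_5": 0.3}
ranked = ensemble_attributions([saliency_scores, shap_like_scores])
top_feature = max(ranked, key=ranked.get)  # -> "valve_3"
```

Averaging compensates for a single method's blind spots: a feature ranked highly by most methods is more trustworthy than one flagged by only one.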
Group-based Robustness: A General Framework for Customized Robustness in the Real World
Weiran Lin, Carnegie Mellon University; Keane Lucas, Carnegie Mellon University; Neo Eyal, Tel Aviv University; Lujo Bauer, Carnegie Mellon University; Michael K. Reiter, Duke University; Mahmood Sharif, Tel Aviv University
Abstract: Machine-learning models are known to be vulnerable to evasion attacks that perturb model inputs to induce misclassifications. In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks. Specifically, we find that conventional metrics measuring targeted and untargeted robustness do not appropriately reflect a model's ability to withstand attacks from one set of source classes to another set of target classes. To address the shortcomings of existing methods, we formally define a new metric, group-based robustness, that complements existing metrics and is better suited for evaluating model performance in certain attack scenarios. We show empirically that group-based robustness allows us to distinguish between models' vulnerability against specific threat models in situations where traditional robustness metrics do not apply. Moreover, to measure group-based robustness efficiently and accurately, we 1) propose two loss functions and 2) identify three new attack strategies. We show empirically that with comparable success rates, finding evasive samples using our new loss functions saves computation by a factor as large as the number of targeted classes, and finding evasive samples using our new attack strategies saves time by up to 99% compared to brute-force search methods. Finally, we propose a defense method that increases group-based robustness by up to 3.52 times.
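The source-to-target-class idea can be made concrete with a toy evaluation. This is an illustrative sketch under assumed data, not the paper's metric definition: here an attack counts as a success only when an input from a designated source-class set is misclassified into a designated target-class set.

```python
# Illustrative sketch (hypothetical setup, not the paper's code): a
# group-based success rate over (true label, adversarial prediction) pairs.

def group_success_rate(results, source_classes, target_classes):
    """results: list of (true_label, adversarial_prediction) pairs."""
    relevant = [(t, p) for t, p in results if t in source_classes]
    if not relevant:
        return 0.0
    hits = sum(1 for t, p in relevant if p in target_classes)
    return hits / len(relevant)

# Hypothetical face-recognition scenario: unauthorized users (classes 0-1)
# try to be recognized as privileged users (classes 8-9). Pairs from
# classes outside the source set (e.g., class 5) are ignored.
results = [(0, 8), (0, 3), (1, 9), (1, 1), (5, 8)]
rate = group_success_rate(results, {0, 1}, {8, 9})  # -> 0.5
```

Untargeted robustness would count the (0, 3) misclassification as an attack success, while this group-based view does not, since class 3 is not in the target set.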
TrustSketch: Trustworthy Sketch-based Telemetry on Cloud Hosts
Zhuo Cheng, Carnegie Mellon University; Maria Apostolaki, Princeton University; Zaoxing Liu, University of Maryland; Vyas Sekar, Carnegie Mellon University
Abstract: Cloud providers deploy telemetry tools in software to perform end-host network analytics. Recent efforts show that sketches, a kind of approximate data structure, are a promising basis for software-based telemetry, as they provide high fidelity for many statistics with a low resource footprint. However, an attacker can compromise sketch-based telemetry results via software vulnerabilities. Consequently, an attacker can nullify the value of telemetry, e.g., by avoiding attack detection or inducing accounting discrepancies. In this paper, we formally define the requirements for trustworthy sketch-based telemetry and show that prior work cannot meet these requirements due to the sketch's probabilistic nature and performance requirements. We present the design and implementation of TRUSTSKETCH, a general framework for trustworthy sketch telemetry that can support a wide spectrum of sketching algorithms. We show that TRUSTSKETCH is able to detect a wide range of attacks on sketch-based telemetry in a timely fashion while incurring only minimal overhead.
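For readers unfamiliar with the "approximate data structure" the abstract mentions, below is a minimal count-min sketch, a widely used sketching algorithm for flow-level telemetry. This is a generic illustration of the kind of structure such systems protect, not TRUSTSKETCH itself; the flow key and counts are made up.

```python
# Illustrative count-min sketch (a common sketching algorithm for
# telemetry; not TRUSTSKETCH itself), estimating per-flow packet counts
# in a fixed-size table regardless of how many flows are observed.
import hashlib

class CountMinSketch:
    def __init__(self, rows=4, cols=256):
        self.rows, self.cols = rows, cols
        self.table = [[0] * cols for _ in range(rows)]

    def _index(self, key, row):
        # One independent-looking hash per row, derived by salting with
        # the row number.
        h = hashlib.sha256(f"{row}:{key}".encode()).digest()
        return int.from_bytes(h[:4], "big") % self.cols

    def add(self, key, count=1):
        for r in range(self.rows):
            self.table[r][self._index(key, r)] += count

    def estimate(self, key):
        # Taking the minimum across rows bounds the overestimate caused
        # by hash collisions; the sketch never undercounts.
        return min(self.table[r][self._index(key, r)] for r in range(self.rows))

sketch = CountMinSketch()
for _ in range(42):
    sketch.add("10.0.0.1->10.0.0.2")  # hypothetical flow key
estimate = sketch.estimate("10.0.0.1->10.0.0.2")  # >= 42, likely exactly 42
```

The probabilistic overestimation visible here is exactly what makes verifying sketch integrity hard: a slightly wrong answer can be legitimate noise or tampering.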