CS 7775: Seminar in Computer Security: Machine Learning Security and Privacy
Fall 2023

Instructors:
§ Instructor: Alina Oprea (alinao)
§ TA: John Abascal (abascal.j@northeastern.edu)

Class Schedule:
§ Monday and Thursday, 11:45am-1:25pm ET
§ Location: Hastings Suite 210

Office Hours:
§ Alina: Thursday, 3-4pm ET and by appointment
§ John: Monday, 2-3pm ET and by appointment

Class forum: Slack

Class policies: Academic integrity policy is strictly enforced.
Class Description: Machine learning techniques are increasingly used for automated decisions in applications such as health care, finance, autonomous vehicles, personalized recommendations, and cyber security. These critical applications require strong guarantees on both the integrity of the machine learning models and the privacy of the user data used to train them. Recently, foundation models such as large language models (LLMs) have been trained on massive datasets crawled from the web and are subsequently fine-tuned for new tasks, including summarization, translation, code generation, and conversational agents. This trend raises many concerns about the security of the foundation models and the models derived from them, as well as the privacy of the data used to train them. The area of adversarial machine learning studies the effect of adversarial attacks against machine learning models and aims to design robust mitigation algorithms that make ML trustworthy. In this seminar course, we will study a variety of adversarial attacks on machine learning, deep learning systems, and foundation models that impact the security and privacy of these systems, and we will discuss existing mitigations and the challenges in making machine learning trustworthy.

The objectives of the course are the following:
§ Provide an overview of several machine learning models for classification and regression, including logistic regression, SVM, decision trees, ensemble learning, deep neural network architectures, federated learning, reinforcement learning, and large language models.
§ Provide in-depth coverage of adversarial attacks on machine learning systems, including evasion attacks at inference time, poisoning attacks at training time, and privacy attacks (a minimal sketch of one such evasion attack follows this list).
§ Learn how to classify attacks according to the adversarial objective, knowledge, and capability, and discuss the taxonomy of attacks in adversarial ML based on the recent NIST report.
§ Discuss new threat models of adversarial attacks against foundation models and large language models.
§ Understand existing methods for training robust models and the challenges of achieving both robustness and accuracy.
§ Read recent, state-of-the-art research papers from both security and machine learning conferences and discuss them in class. Students will actively participate in class discussions and lead discussions on multiple papers during the semester.
§ Provide students the opportunity to complete several assignments on machine learning security and privacy, and to work on a semester-long research project on a topic of their choice.
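
To give a concrete flavor of an evasion attack at inference time, below is a minimal sketch of the fast gradient sign method (FGSM), assuming a differentiable PyTorch classifier that outputs logits; the model, inputs, and epsilon value are illustrative placeholders, not part of the course materials.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Fast gradient sign method: take one step of size epsilon in the
        # direction of the sign of the loss gradient, producing an input the
        # model is more likely to misclassify while staying close to x.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Clamp back to the valid input range (assumed here to be [0, 1]).
        return x_adv.clamp(0.0, 1.0).detach()

Stronger iterative variants of this attack, and defenses against them, are among the topics the course covers.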
Pre-requisites:
§ Probability, calculus, and linear algebra
§ Basic knowledge of machine learning

Grading
The grade will be based on:
§ Assignments – 15%
§ Paper summaries – 10%
§ Discussion leading – 25%
§ Final project – 50%
Review materials
§ Probability review notes from Stanford's machine learning class
§ Sam Roweis's probability review
§ Linear algebra review notes from Stanford's machine learning class
Other resources
Books:
§ Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Second Edition, Springer, 2009.
§ Christopher Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
§ A. Zhang, Z. Lipton, M. Li, and A. Smola. Dive into Deep Learning.
§ C. Dwork and A. Roth. The Algorithmic Foundations of Differential Privacy.
§ Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms.