
Information Forum No. 116: Safeguarding Privacy, Robustness and Intellectual Property of Machine Learning

Posted: June 16, 2025, 16:12

Time: 11:00, Wednesday, June 18, 2025
Venue: Conference Room 606, 杨咏曼 Building

Abstract:
The growing complexity of deep neural network models in modern application domains (e.g., vision and language) necessitates a complex training process that involves extensive data, sophisticated design, and substantial computation. These inherently encapsulate the intellectual property (IP) of data and model owners, highlighting the urgent need to protect privacy, ensure model robustness, and safeguard the proprietary rights of model owners during the development, deployment, and post-deployment stages. In this talk, we will present our recent research on holistic strategies for privacy preservation, model robustness verification, and model usage control, addressing challenges across the entire model lifecycle. Our approaches aim to advance responsible AI practice by ensuring the secure and ethical utilization of AI systems.

Speaker Bio:
Guangdong Bai is an Associate Professor at the University of Queensland, Australia. He obtained his PhD degree from the National University of Singapore, and his Bachelor's and Master's degrees from Peking University. His research spans trustworthy AI, system security, and privacy. His work has appeared in top security and software engineering venues such as IEEE S&P, NDSS, USENIX Security, ICSE, and FSE. He is an Associate Editor of IEEE Transactions on Dependable and Secure Computing.
