
2023 IEEE Conference on Secure and Trustworthy Machine Learning – Call for Papers

The first edition of this new conference will take place 8-10 February 2023. It will focus on the theoretical and practical understanding of vulnerabilities inherent to ML systems, explore the robustness of ML algorithms and systems, and aid in developing a unified, coherent scientific community that aims to build trustworthy ML systems. More details can be found on the conference website: https://satml.org/

Program committee co-chairs: Patrick McDaniel and Nicolas Papernot 

Key Dates

  • Abstracts due for paper submissions: Monday, August 22, 2022 (11:59 PM AoE, UTC-12) 
  • Paper submission: Thursday, September 1, 2022 (11:59 PM AoE, UTC-12) 
  • Paper notification: Tuesday, November 15, 2022 
  • Camera-ready versions of papers and abstracts: Thursday, December 15, 2022
  • Conference: Wednesday, February 8 to Friday, February 10, 2023 

We solicit research papers, systematization of knowledge papers, and position papers. 

Areas of Interest include (but are not limited to):

  • Trustworthy data curation
  • Novel attacks on ML systems
  • Methods for defending against attacks on ML systems
  • Forensic analysis of ML systems
  • Verifying properties of ML systems
  • Securely and safely integrating ML into systems
  • Privacy (e.g., confidentiality, inference privacy, machine unlearning)
  • Fairness
  • Accountability
  • Transparency
  • Interpretability

Submission Categories
Research Papers, up to 12 pages of body text, with unlimited additional space for references and well-marked appendices. These must be well argued, worthy of publication and citation, and on the topics above. Research papers must present new work, evidence, or ideas.
Systematization of Knowledge papers, up to 12 pages of body text, should integrate and clarify ideas in an established, major research area; support or challenge long-held beliefs in such an area with compelling evidence; or present a convincing, comprehensive new taxonomy of some aspect of secure and trustworthy machine learning. 
 
Position papers with novel visions, with a minimum of 5 pages of body text, will also be considered. Reviewers will be asked to evaluate whether the vision brings opinions and views on issues of broad interest to the computing community, typically, but not exclusively, of a nontechnical nature. Controversial issues will not be avoided but will be dealt with fairly. Authors are welcome to submit carefully reasoned "Viewpoints" in which positions are substantiated by facts or principled arguments. A vision may relate to the wide and abundant spectrum of trustworthy machine learning: its open challenges, technical visions and perspectives, educational aspects, societal impact, significant applications, and research results of high significance and broad interest. Position papers should set the background and provide introductory references, define fundamental concepts, compare alternative approaches, and explain the significance or application of a particular technology or result by means of well-reasoned text and pertinent graphical material. The use of sidebars to illustrate significant points is encouraged.
 
While a paper is under submission to this conference, authors may choose to give talks about their work, post a preprint of the paper online, and disclose security vulnerabilities to vendors.
