Wednesday, May 19th
Matt Welsh, OctoML
Machine learning models are notorious for being computationally expensive. This is especially a problem when running ML models on computationally limited edge devices. The standard way to deal with this is to use vendor-specific, hand-tuned libraries that provide fast implementations of common ML operators, such as convolution and pooling. However, building and maintaining these libraries for a wide range of devices and network operators requires a huge amount of engineering effort, and even a good hand-tuned implementation may not perform optimally in all cases.
Apache TVM takes a different approach: it automatically generates fast binary code for any model, on any device, by exploring a large search space of potential optimizations. TVM itself uses machine learning to guide its code-synthesis process, saving months of engineering time. The code generated by TVM can be many times faster than hand-optimized libraries, in some cases exceeding a 30x speedup over hand-tuned code. TVM is an open-source project with hundreds of contributors and an active developer community.
In this talk, I will give an overview of how Apache TVM delivers this level of performance, and how we are using it at OctoML to develop the Octomizer, a cloud-based, automatic ML model compiler for edge and server devices.
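As a toy illustration of the search-based tuning idea described in the abstract (this is not TVM's actual API; the function names, tile sizes, and matrix dimensions below are invented for illustration), the sketch times several loop-tiling candidates for a matrix multiply and keeps the fastest, the way an auto-tuner explores a schedule search space:

```python
# Toy sketch of search-based tuning (hypothetical; not TVM's API).
# We benchmark several loop-tiling candidates for a blocked matrix
# multiply and pick the fastest, mimicking schedule-space exploration.
import random
import time

def matmul_tiled(A, B, n, tile):
    """Blocked n x n matrix multiply with a given tile size."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C

def tune(n=64, candidates=(4, 8, 16, 32, 64)):
    """Time each tiling candidate and return (best_tile, best_seconds)."""
    random.seed(0)
    A = [[random.random() for _ in range(n)] for _ in range(n)]
    B = [[random.random() for _ in range(n)] for _ in range(n)]
    best = None
    for tile in candidates:
        start = time.perf_counter()
        matmul_tiled(A, B, n, tile)
        elapsed = time.perf_counter() - start
        if best is None or elapsed < best[1]:
            best = (tile, elapsed)
    return best

best_tile, best_time = tune()
print(f"fastest tile size: {best_tile} ({best_time:.4f}s)")
```

Real TVM explores a far larger space of schedules and uses a learned cost model to avoid benchmarking every candidate on the target device, which is where the months of saved engineering time come from.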
Matt Welsh is the VP of Engineering at OctoML, a Seattle-based startup founded by a team from the University of Washington and the inventors of Apache TVM. Matt’s research interests focus on machine learning systems, mobile computing, and distributed systems. Prior to OctoML, Matt was a professor at Harvard, an engineering director at Google, and an engineering lead at Apple and Xnor.ai. He received his PhD from UC Berkeley.
Thursday, May 20th
Keynote Presentation: Toward AI-enhanced Design of Resilient Cyber-Physical Systems: a Journey from Inception to Present Times
Bruno Sinopoli, Washington University in St. Louis
Cyber-Physical Systems have been instrumental in bringing together talented researchers from different domains to develop a paradigm capable of addressing modern real-world system design issues, where separation of concerns is not a realistic assumption due to the close interplay of sensing, communication, computing, and decision making. As a result, system-level research has become more relevant and impactful. In this talk I will provide a personal view of the progress made in CPS since its inception and offer a perspective on where the field is headed. In particular, I will focus on the issue of guaranteeing resilience and trustworthiness while leveraging modern data-driven methods in the presence of large uncertainties and adversarial actions.
Bruno Sinopoli is the Das Family Distinguished Professor at Washington University in St. Louis, where he is also the founding director of the Center for Trustworthy AI in Cyber-Physical Systems and chair of the Electrical and Systems Engineering Department. He received the Dr. Eng. degree from the University of Padova in 1998 and his M.S. and Ph.D. in Electrical Engineering from the University of California at Berkeley in 2003 and 2005, respectively. After a postdoctoral position at Stanford University, Dr. Sinopoli was a member of the faculty at Carnegie Mellon University from 2007 to 2019, where he was a professor in the Department of Electrical and Computer Engineering with courtesy appointments in Mechanical Engineering and in the Robotics Institute, and co-director of the Smart Infrastructure Institute. His research interests include the modeling, analysis, and design of resilient Cyber-Physical Systems, with applications to smart interdependent infrastructure systems such as energy and transportation, the Internet of Things, and control of computing systems.
Friday, May 21st
Rupak Majumdar, Max Planck Institute for Software Systems
CPS applications tightly integrate computation, communication, geometric reasoning, and dynamics. Despite many advances in robotics and programming-languages research individually, we still lack language abstractions and reasoning principles for such systems. In this talk, I will describe work on the PGCD project, in which we develop programming models and reasoning principles for multi-robot coordination. PGCD models concurrent components that communicate through message passing and execute continuous-time motion primitives in physical space. We show how a combination of choreography types, assume-guarantee reasoning, and reactive synthesis can lead to formal verification of correctness against global specifications. We have compiled PGCD programs to ROS and shown that it is feasible to verify non-trivial multi-robot coordination programs.
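As a loose, hypothetical sketch of the choreography idea from the abstract (the component names, message labels, and runtime check below are invented for illustration and are not PGCD syntax), a global choreography lists the expected message sequence, and each component's sends are checked against it:

```python
# Toy sketch of choreography-checked message passing (hypothetical;
# not PGCD's actual syntax). A global choreography gives the expected
# message order, and each send is checked against it at runtime, in
# the spirit of choreography types.
from queue import Queue

# Global choreography: (sender, receiver, label), in order.
CHOREOGRAPHY = [
    ("cart", "arm", "arrived"),  # cart reached the handoff point
    ("arm", "cart", "grabbed"),  # arm confirms it picked up the object
    ("cart", "arm", "leaving"),  # cart announces it is moving away
]

class Component:
    def __init__(self, name, trace):
        self.name = name
        self.inbox = Queue()
        self.trace = trace  # shared log of messages actually exchanged

    def send(self, other, label):
        step = len(self.trace)
        expected = CHOREOGRAPHY[step]
        actual = (self.name, other.name, label)
        # Runtime conformance check against the global choreography.
        assert expected == actual, f"step {step}: expected {expected}, got {actual}"
        self.trace.append(actual)
        other.inbox.put(label)

    def receive(self):
        return self.inbox.get()

trace = []
cart, arm = Component("cart", trace), Component("arm", trace)
cart.send(arm, "arrived"); arm.receive()
arm.send(cart, "grabbed"); cart.receive()
cart.send(arm, "leaving"); arm.receive()
print("trace conforms to choreography:", trace == CHOREOGRAPHY)
```

PGCD goes much further than this runtime check: choreography types are verified statically, and the motion primitives each component executes between messages are reasoned about with assume-guarantee contracts over physical space and time.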
Rupak Majumdar is a Scientific Director at the Max Planck Institute for Software Systems. His research interests are in formal methods and cyber-physical systems. He received “Most Influential Paper” awards from POPL and PLDI for his work in software verification. His research is funded in part by the German Science Foundation’s Collaborative Research Center on Perspicuous Systems.