Program Overview
|
Main Conference
January 5–7, 2022 (Wednesday–Friday)
Workshops and Tutorials
January 4 and 8, 2022 (Tuesday & Saturday)
Program Guide PDF
Full Program PDF
Keynote Speakers
Ahmed Elgammal, Professor in the Department of Computer Science, Rutgers University
Kristen Grauman, Professor in the Department of Computer Science, University of Texas at Austin
Zhengyou Zhang, Director of AI Lab and Robotics X, Tencent
Keynote: Wednesday, January 5, 5:00 PM HST
Kristen Grauman, Professor in the Department of Computer Science at the University of Texas at Austin and a Research Scientist in Facebook AI Research (FAIR)
Style and Influence from In-the-Wild Fashion Photos
The fashion domain is a magnet for computer vision. New vision problems are emerging in step with the fashion industry's rapid evolution towards an online, social, and personalized business. Style models, trend forecasting, and recommendation all require visual understanding with rich detail and subtlety. I will present our work developing computer vision methods for fashion. To begin, we explore how to discover styles from Web photos, so as to optimize mix-and-match wardrobes, suggest minimal edits to make an outfit more fashionable, or recommend clothing that flatters diverse human body shapes. Next, turning to the world stage, we investigate fashion forecasting and influence. Learned directly from photos, our models discover the “underground map” of a city based on the different clothes people wear, and they forecast what styles will be popular in the future by capturing how trends propagate across 44 major world cities. Finally, building on this notion of fashion influence, we quantify which cultural factors (as captured by millions of news articles) most affect the clothes people choose to wear across a century of fashion photos.
Kristen Grauman is a Professor in the Department of Computer Science at the University of Texas at Austin and a Research Scientist in Facebook AI Research (FAIR). Her research in computer vision and machine learning focuses on video, visual recognition, and active perception (embodied AI). Before joining UT-Austin in 2007, she received her Ph.D. at MIT. She is an IEEE Fellow, AAAI Fellow, Sloan Fellow, and a recipient of NSF CAREER and ONR Young Investigator awards, the PAMI Young Researcher Award in 2013, the 2013 Computers and Thought Award from the International Joint Conference on Artificial Intelligence (IJCAI), and the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2013. She was inducted into the UT Academy of Distinguished Teachers in 2017. She and her collaborators have been recognized with several Best Paper awards in computer vision, including a 2011 Marr Prize and a 2017 Helmholtz Prize (test-of-time award). She has served as an Associate Editor-in-Chief for PAMI and a Program Chair for CVPR 2015 and NeurIPS 2018.
Keynote: Thursday, January 6, 5:00 PM HST
Ahmed Elgammal, Professor in the Department of Computer Science, Rutgers University
Art at the Age of AI
In this talk, I will present results of recent research activities at the Art and Artificial Intelligence Laboratory at Rutgers University. We investigate perceptual and cognitive tasks related to human creativity in visual art. In particular, we study problems related to art styles, influence, and the quantification of creativity. We develop computational models that aim at providing answers to questions about what characterizes the sequence and evolution of changes in style over time. The talk will also cover advances in using AI for art and music generation.
Dr. Ahmed Elgammal is a Professor in the Department of Computer Science at Rutgers University, and a researcher and entrepreneur whose pioneering work explores whether AI can be creative without human intervention. He is the founder and director of the Art and Artificial Intelligence Laboratory at Rutgers and an Executive Council Faculty member at the Rutgers University Center for Cognitive Science. He is also the founder and CEO of Artrendex, a startup that builds innovative AI technology for the art market.
Keynote: Friday, January 7, 5:00 PM HST
Dr. Zhengyou Zhang, Director of AI Lab and Robotics X, Tencent
Integrated Physical-Digital World and Digital Human
With the rapid development of digital technologies such as AI, VR, AR, and XR, and, more importantly, near-ubiquitous mobile broadband coverage, we are entering an Integrated Physical-Digital World (IPhD): the tight integration of the virtual world with the physical world. The IPhD is characterized by four key technologies: virtualization of the physical world, realization of the virtual world, the holographic internet, and intelligent agents. The internet will continue to develop with faster speeds and broader bandwidth, and will eventually be able to communicate holographic content including 3D shape, appearance, spatial audio, touch, and smell. Intelligent agents, such as digital humans and digital/physical robots, travel between the digital and physical worlds. In this talk, we will describe our work on the IPhD, and especially on digital humans for this IPhD world. This includes computer vision techniques for building digital humans, multimodal text-to-speech synthesis (voice and lip shapes), speech-driven face animation, neural-network-based body motion control, human-digital-human interaction, and an emotional video game anchor.
Dr. Zhengyou Zhang is currently serving as Director of AI Lab and Robotics X, Tencent. He is a Fellow of the ACM (Association for Computing Machinery) and of the IEEE (Institute of Electrical and Electronics Engineers). A world-renowned expert in computer vision and multimedia technology, he has made pioneering contributions in stereo vision, motion analysis, camera calibration, robot navigation, and immersive remote interaction, among other areas. He has published more than 250 papers in top international conferences and journals, which have been cited more than 56,000 times, and he holds nearly 200 issued patents. He received the IEEE Helmholtz Test of Time Award in 2013 for his “Zhang’s method”.