$0 – $106.46

Toronto Machine Learning Society (TMLS) : 2021 Annual Virtual Conference


Event Information

Location

Online event

Refund policy

Refunds up to 30 days before event

Event description
A unique experience to upskill and learn from industry and academics in our community

About this event


Despite the vast opportunities that lie within our data, there are explicit challenges, both technical & strategic.

You're invited to join us at #TMLS2021

We'll address common hurdles and celebrate the accomplishments of our community of over 10,000 practitioners, academics & strategists, as we strive to advance your working knowledge of ML/AI through shared:

  • Workshops
  • Interactive Conference Presentations
  • P2P Networking
  • Career Opportunities and Hiring

Speakers include senior leaders and researchers from Vector, CIFAR, Google, Apple, LinkedIn, HuggingFace, Facebook AI, and more.

2021 key themes include:

  • Transfer Learning
  • Feature Engineering
  • Feature Store Design and Maintenance
  • Transformers
  • Explainability
  • Model Monitoring
  • MLOps
  • Real business impact case studies in Finance, Insurance, Security, Retail, Telecom, and much more!

FOR BULK TICKETS EMAIL INFO@TORONTOMACHINELEARNING.COM

Each ticket includes:

  • Access to 80+ hours of live-streamed content (incl. recordings)
  • Talks for beginners/intermediate & advanced
  • Network and connect through our event app
  • Q+A with speakers
  • Channels to share your work with the community
  • Run your chat groups and virtual gatherings!
  • Hands-on Workshops

PLEASE NOTE THAT BONUS WORKSHOPS ARE ON NOVEMBER 15th AND 16th WHILE THE CONFERENCE BREAKOUTS (ON HOPIN.TO PLATFORM) ARE ON NOVEMBER 17th and 18th

Taken from the real-life experiences of our community, the Steering Committee has selected the top applications, achievements, and knowledge-areas to highlight.

Come and expand your network with machine learning experts and further your own personal & professional development in this exciting and rewarding field. 

We believe these events should be as accessible as possible and set our ticket passes accordingly.

The TMLS initiative is dedicated to promoting the effective and responsible development of AI/ML across all industries, and to helping data practitioners, researchers, and students fast-track their learning and develop rewarding careers in the field of ML and AI.

What to expect at TMLS:

Business Leaders, including C-level executives and non-tech leaders, will explore immediate opportunities, and define clear next steps for building their business advantage around their data.

Practitioners will dissect technical approaches, case studies, tools, and techniques to explore challenges within Natural Language Processing, Neural Nets, Reinforcement Learning, Generative Adversarial Networks (GANs), Evolution Strategies, AutoML, and more.

Researchers will have the opportunity to share cutting-edge advancements in the field with their peers.

Machine learning, deep learning, and AI are some of the fastest-growing and most exciting areas for knowledge workers - simultaneously, they are the key to untapped revenue sources and strategic insights for businesses. Firms are using AI to create unprecedented business advantages that are reshaping the global - but more specifically Canadian - economic landscape. Practitioners are leveraging and expanding their expertise to become high-impact global leaders.

Despite the vast opportunities that lie within our data, there are also explicit challenges to revealing their potential. Furthermore, transitioning to a career in practicing AI/ML, or managing ML- and AI-driven businesses, is less than straightforward.

Why should I attend the Toronto Machine Learning Society (TMLS) 2021 Annual Conference & Expo?

Developments in the field are happening fast: For practitioners, it's important to stay on top of the latest advances; for business leaders, the implementation of new technology brings specific challenges.

The goal of TMLS is to empower data practitioners, academics, engineers, and business leaders with direct access to the people who matter most, and the practical information to help advance your projects. For data practitioners, you'll hear how to cut through the noise and find innovative solutions to technical challenges, learning from workshops, case studies, and P2P interactions. Business leaders will learn from the experience of those who have successfully implemented ML/AI and actively manage data teams.

Seminar series content will be practical, non-sponsored, and tailored to our ML ecosystem. TMLS is not a sales pitch - it's a connection to a deep community that is committed to advancing ML/AI and to creating and delivering value and exciting careers for businesses and individuals.

We're committed to helping you get the most out of the TMLS.

Joining together under one roof will be:

  • Machine Learning/deep learning PhDs and researchers
  • C-level business leaders
  • Industry experts
  • Data Engineers, Machine Learning Engineers
  • Enterprise innovation labs seeking to grow their teams
  • Community and university machine learning groups

Site: www.torontomachinelearning.com 

Steering Committee & Team 

Who Attends

FAQs

Q: What are the technical requirements to be able to participate?

A laptop or personal computer and a strong, reliable Wi-Fi connection. Google Chrome is recommended to run the Virtual Conference platform.

Q: Can I watch the live stream sessions on my phone or tablet computer?

Yes, the Virtual Conference is accessible via a smartphone or tablet.

Q: Which sessions are going to be recorded? When will the recordings be available and do I have access to them?

All sessions will be recorded during the event (with speaker permission), made available to attendees approximately 2-4 weeks after the event, and accessible for 12 months after release.

Q: Are there ID or minimum age requirements to enter the event? No, there are not. Everyone is welcome.

Q: Can I get a training certificate?  Yes, we can provide this upon request.

Q: How can I contact the organizer with any questions? Please email info@torontomachinelearning.com

Q: What's the refund policy? Tickets are refundable up to 30 days before the event.

Q: Who will attend? The event will have three tracks: one for business, one for advanced practitioners/researchers, and one for applied use cases (focusing on various industries). Attendees include business executives, Ph.D. researchers, engineers, and practitioners ranging from beginner to advanced. See attendee demographics and a list of attendee titles from our past event here.

Q: Will you focus on any industries in particular? Yes, we will have talks that cover Finance, Healthcare, Retail, Transportation, and other key industries where applied ML has made an impact. 

Q: Can I speak at the event? Yes, you can submit an abstract here. The deadline to submit a talk is Oct 15th; however, we will continue to review submissions.

*Content is non-commercial and speaking spots cannot be purchased. 

Q: Will you give out the attendee list? No. We do our best to ensure attendees are not inundated with messages; attendees can stay in contact through our Slack channel and follow-up monthly socials.

Q: Can my company have a display? Yes, there will be spaces for company displays. You can inquire at faraz@torontomachinelearning.com.

Current Confirmed Submissions:

Adam Harvey, Independent Researcher - Exposing.ai

Researchers Gone Wild

This talk will discuss the Exposing.ai research project, a multi-year investigation into the origins and endpoints of biometric image training datasets created from "media in the wild". Over the past few years, several prominent datasets, including MS-Celeb-1M, DukeMTMC, VGGFace2, and MegaFace, have been retracted, heavily criticized, or mysteriously deprecated without explanation.

Driving these takedowns is the research project exposing.ai that has uncovered how and why academic biometric datasets are being exploited by the global biometric surveillance industry. As a recent [Nature article](https://www.nature.com/articles/d41586-020-03187-3) has explained, many researchers are reconsidering their use of these datasets and whether it's appropriate for academia to engage in this research. This talk will discuss several of the most egregious dataset missteps and provide a survey of recent trends and papers about improving dataset authorship.

Speaker's Bio:

Adam Harvey (US/DE) is a researcher and artist based in Berlin focused on computer vision, privacy, and surveillance technologies. He received his masters degree from the Interactive Telecommunications Program at New York University (2010) and a BA in Integrative Arts from Pennsylvania State University (2004). His previous work includes CV Dazzle (camouflage from face recognition), the Anti-Drone Burqa (camouflage from thermal cameras), SkyLift (geolocation spoofing device), and Exposing.ai (interrogating face recognition datasets). His art and research have been featured widely in media publications including the Economist, New York Times, Financial Times, Süddeutsche Zeitung, Der Spiegel, Wall Street Journal, and the Washington Post. Harvey is the founder of VFRAME.io, a software project that innovates computer vision technology for human rights researchers and investigative journalists, which received an award of distinction from Ars Electronica and a nomination for the Beazley Design of the Year award in 2019.

--

Elizabeth Adams, Chief AI Ethics & Culture Advisor/Affiliate Fellow - Women in AI/Institute of Human-Centered AI

Leadership of Responsible AI - The Case for Inclusive Tech

Artificial Intelligence (AI) is changing how we live and how we work. As we advance into the 4th Industrial Revolution, the digital age, AI enabled technologies have the power to be a force for good.

These technologies can also amplify forms of inequality, discrimination, and bias. This presentation will highlight cases of how algorithmic bias happens and the significant impacts it can have on society.

--

Nima Safaei, Sr. Data Scientist, and Taha Jaffer, Head of Wholesale Banking and Global Treasury AI, Scotiabank

Trade-off between Optimality and Explainability

One of the top challenges in AI/ML is that black-box models cannot be trusted in high-risk areas due to a lack of explainability. Generally speaking, explainability in ML is twofold: causal explainability (also known as interpretability) and counterfactual explainability. While the former addresses 'why', the latter addresses how small, plausible perturbations of the input modify the output. The authors' focus is on counterfactual explainability through the optimization lens.

In ML, the learning phase is actually a constrained optimization problem in which a given objective (a.k.a. loss) function must be optimized subject to some constraints, e.g., regularization, lasso, dropout, etc. Thus, through the constrained optimization lens, explainability in fact refers to 'sensitivity analysis' or 'post-optimality' practice. Using post-optimality, we should focus on those learning coefficients that have a narrow range of optimality and coefficients near the endpoints of the range. However, the key conjecture in post-optimality is that the optimization (learning) algorithm guarantees the global or a near-global optimum. Indeed, the majority of optimization algorithms in ML cannot guarantee the global optimum due to uneven (non-convex) loss surfaces or the stochastic nature of the method. In fact, non-convexity and stochasticity are two sides of the complexity coin.

In this talk, the authors argue that there is a trade-off between model explainability and accuracy. The lack of a global-optimum guarantee is the key reason why highly accurate (and mostly black-box) models are not explainable. This trade-off raises a critical question during the model selection phase: is a more explainable but less accurate model better than a less explainable but more accurate one?
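To make the sensitivity-analysis idea above concrete, here is a rough, hypothetical Python sketch (not the speakers' code): fit a lasso-regularized model, nudge the regularization strength slightly, and see which coefficients move the most. The data and parameter values are illustrative assumptions.

```python
# Hedged sketch: sensitivity of lasso coefficients to a small perturbation of the
# regularization strength, as a stand-in for "post-optimality" analysis.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)

alpha = 0.5   # nominal regularization strength (illustrative)
eps = 0.05    # small perturbation

coef_nominal = Lasso(alpha=alpha).fit(X, y).coef_
coef_perturbed = Lasso(alpha=alpha * (1 + eps)).fit(X, y).coef_

# Coefficients whose value shifts noticeably under a small perturbation are the
# ones with a "narrow range of optimality" in the sense described above.
sensitivity = np.abs(coef_perturbed - coef_nominal)
for i, s in enumerate(sensitivity):
    print(f"feature {i}: |delta coef| = {s:.3f}")
```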

Speakers' Bio:

Nima has a Ph.D. in systems and industrial engineering with a background in applied mathematics. He held a postdoctoral position at the C-MORE Lab (Center for Maintenance Optimization & Reliability Engineering), University of Toronto, Canada, working on machine learning and operations research (ML/OR) projects in collaboration with various industry and service sectors. He was with the Department of Maintenance Support and Planning, Bombardier Aerospace, with a focus on ML/OR methods for reliability/survival analysis, maintenance, and airline operations optimization. Nima is currently with the Data Science & Analytics (DSA) lab, Scotiabank, Toronto, Canada, as a senior data scientist. He has more than 40 peer-reviewed articles and book chapters published in top-tier journals as well as one published patent. He has also been invited to present his findings at top ML conferences such as GRAPH+AI 2020, NVIDIA GTC 2020/2021, and ICML 2021.

--

Alexander Lavin, Founder & Chief Technologist, Institute for Simulation Intelligence

Towards Machine Intelligence Capable of Nobel-caliber Science

In the past half-century, advances in computation have accelerated scientific progress and innovation in diverse fields at all scales, from particle physics to socioeconomics to cosmology. However, recent works point to stagnating innovation and diminishing returns in science (Cowen & Southwood '19). Can advancements in artificial intelligence (AI) and machine learning (ML) help push science, engineering, and other fields through existing bottlenecks (be they physical, computational, or otherwise)? Is AI-driven science necessary for humankind to decode and solve its greatest challenges, such as nuclear fusion and neurodegenerative disease?

The application of AI in scientific discovery presents very different challenges relative to popular environments such as game-playing and machine translation. In general, scientific discoveries require hypothesis and solution spaces that are orders of magnitude larger than existing AI environments, a far more elaborate verification process, and non-trivial integration with scientific materials and machines.

In this talk, leading AI researcher Alexander Lavin will discuss the challenges and opportunities in AI-driven science, and further propose a "Nobel-Turing Challenge": AI systems capable of making Nobel-caliber discoveries in science. Lavin will present several key areas of AI/ML, simulation, and computing to advance towards this goal, recent progress in areas such as chemistry and climate, and critical operational aspects like human-machine teaming and systems engineering in AI.

Speaker's Bio:

Alexander Lavin is a world-leading AI researcher and software engineer, specializing in probabilistic machine learning, scientific computing, and human-centric AI systems. He is the Founder and Chief Technologist of the Institute for Simulation Intelligence, a public-benefit "focused research organization" aiming to reshape the scientific method for the machine age, building novel technologies to synergise AI and simulation in areas such as climate, synbio, nuclear energy, and more. Lavin has explored AI and probabilistic computation via several perspectives: theoretical neuroscience and online learning with Numenta, general intelligence in robotics and computer vision with Vicarious AI, predictive medicines and causality as the founder of Latent Sciences (acquired), Earth systems and climate with NASA as an AI Advisor, and autonomous systems with Astrobotic. Lavin earned his Master's in Mechanical Engineering at Carnegie Mellon, a Master's in Engineering Management at Duke University, and a Bachelor's in Mechanical & Aerospace Engineering at Cornell University. He has won several awards for work in rocket science and space robotics, published in top journals and conferences across AI/ML and neuroscience, and was an honoree for the Forbes 30 Under 30 List in Science and the Patrick J. McGovern Tech for Humanity Prize. In his free time, Lavin enjoys running, yoga, live music, and reading sci-fi and theoretical physics books.

--

Gijsbert Janssen van Doorn, Director of Technical Product Marketing, Run:AI

Unleash the Power of Your GPUs with Run:AI

Did you know that less than half of all AI models make it to production? Making full use of your existing AI infrastructure is even harder. In this session accessible for all levels of AI practitioners, their executive leaders and IT, learn how Run:AI’s compute orchestration platform puts your GPU horsepower to work to make the most of your AI investment. We’ll show real-life examples of GPU utilization at several organizations running AI workloads. Come see for yourself how Run:AI helps you complete more experiments, train models faster and run inference workloads with ease!

Speaker's Bio:

Gijsbert Janssen van Doorn is the Director Technical Product Marketing at Run:AI. In this role, he’s responsible for exciting audiences about the potential of Run:AI, acting as an advocate for the technology that will shape the future of how organisations run AI. Gijsbert comes from a technical engineering background. Prior to Run:AI, he worked 6 years at Zerto, a Cloud Data Management and Protection vendor, where he held multiple roles, from Systems Engineer to Director Technical Marketing. Gijsbert describes himself as a passionate technologist, whose professional motto is: “Never stop learning and never stop having fun.”

--

Robert Monarch, Author, Human-in-the-Loop Machine Learning

Unsolved Problems in Human-in-the-Loop Machine Learning

This talk will feature excerpts from my recently published book "Human-in-the-Loop Machine Learning: Active learning and annotation for human-centered AI". I'll cover some of the most exciting problems in Human-in-the-Loop Machine Learning and promising recent advances that address some of these problems. The talk will start with one of the most basic and long-standing questions in machine learning: what are the different ways that we can interpret uncertainty in our models? The talk will then discuss recent advances in transfer learning, including active transfer learning for adaptive sampling and the implications of intermediate task transfer learning on the choice of annotation task and annotation workforce(s). Finally, I will talk about advances in annotation quality control and annotation interfaces, including ways to identify annotators with rare but valid subjective interpretations and human-computer interaction strategies for combining machine learning predictions with human annotations.

Speaker's Bio:

Robert Monarch is an expert in combining Human and Machine Intelligence, working with Machine Learning approaches to Text, Speech, Image and Video Processing. Robert has founded several AI companies, building some of the top teams in Artificial Intelligence. He has worked in many diverse environments, from Sierra Leone, Haiti and the Amazon, to London, Sydney and Silicon Valley, in organizations ranging from startups to the United Nations. He has shipped Machine Learning Products at startups and at/with Amazon, Apple, Google, IBM & Microsoft.

Robert has published more than 50 papers on Artificial Intelligence and is a regular speaker about technology in an increasingly connected world. He has a PhD from Stanford University. Robert is the author of Human-in-the-Loop Machine Learning (Manning Publications, 2021)

--

Delina Ivanova, Senior Manager, Data, Analytics & Insights, HelloFresh Canada

How HelloFresh Leverages Feature Engineering and Modelling Techniques to Inform Menu Design

With advancements in technology, specifically around an organization's ability to make sense of data, consumers are growing more accustomed to personalized, on-demand solutions. In a business like HelloFresh, where the objective is to offer just that - solutions - product design and redesign is a continuous process. The main challenge in a business which designs, produces, and delivers its own product is continually understanding customer preferences and customer-base composition, and creating product options which satisfy a variety of needs without compromising operational efficiencies or creating infeasibility in the supply chain. This talk will focus on our approach to understanding customer preferences, specifically related to recipe and menu composition, and how we use various engineering techniques to enhance available features. These features are used in a classification model which predicts individual recipe scores during the recipe design process, and can predict overall menu performance based on recipe combinations. The purpose of this work is to take a data-driven approach to product design, minimize trial and error for our culinary team, and align product design with customer acquisition and growth strategies.

Speaker's Bio:

Delina has over 10 years of experience in data analytics across a variety of domains, including financial services and CPG, and functions, including product management, operations, and marketing. She currently leads a full service data team at HelloFresh Canada, supporting all business functions with engineering, reporting and modelling needs to improve decision making in revenue growth and cost management.

--

Lili Mou, Assistant Professor, University of Alberta

Unsupervised Text Generation: Techniques and Applications

In this talk, I will present a novel search-and-learning framework for unsupervised text generation. We define a heuristic scoring function that (roughly) estimates the quality of a candidate sentence for a task, and then perform stochastic local search (such as simulated annealing) to generate an output sentence. We also learn a sequence-to-sequence model that learns from the search results to improve inference efficiency and to smooth out search noise. Our search-and-learning framework shows high unsupervised performance in various natural language generation applications. Our technique should be useful in various industrial applications, especially for startups and the cold-start of new tasks.
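For readers new to stochastic local search over sentences, here is a minimal, hypothetical sketch of the simulated-annealing idea mentioned above; the toy scoring function and edit operator are placeholders, not the speaker's task-specific components.

```python
# Hedged sketch of simulated annealing over candidate sentences.
import math
import random

def score(words):
    # Toy heuristic: prefer shorter sentences that keep the word "summary".
    return (1.0 if "summary" in words else 0.0) - 0.05 * len(words)

def propose(words):
    # Toy edit: randomly delete one word (a real system would also insert/replace).
    if len(words) <= 1:
        return words[:]
    i = random.randrange(len(words))
    return words[:i] + words[i + 1:]

def anneal(sentence, steps=500, t0=1.0):
    current = sentence.split()
    for step in range(steps):
        temperature = t0 * (1 - step / steps) + 1e-6
        candidate = propose(current)
        delta = score(candidate) - score(current)
        # Always accept improvements; accept worse moves with a temperature-dependent probability.
        if delta >= 0 or random.random() < math.exp(delta / temperature):
            current = candidate
    return " ".join(current)

print(anneal("this is a very long and redundant draft of a summary sentence"))
```

A learned sequence-to-sequence model can then be trained on the search outputs to amortize this (slow) search at inference time, which is the "learning" half of the framework described in the abstract.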

Speaker's Bio:

Dr. Lili Mou is an Assistant Professor at the Department of Computing Science, University of Alberta. He is also an Alberta Machine Intelligence Institute (Amii) Fellow and a Canada CIFAR AI (CCAI) Chair. Lili received his BS and PhD degrees in 2012 and 2017, respectively, from the School of EECS, Peking University. After that, he worked as a postdoctoral fellow at the University of Waterloo and a research scientist at Adeptmind (a startup in Toronto, Canada). His research interests include deep learning applied to natural language processing as well as programming language processing. He has publications at top conferences and journals, including AAAI, ACL, CIKM, COLING, EMNLP, ICASSP, ICLR, ICML, IJCAI, INTERSPEECH, NAACL-HLT, NeurIPS, and TACL (in alphabetical order). He also has tutorials presented at EMNLP-IJCNLP'19 and ACL'20.

--

Emanuele Rossi, Machine Learning Researcher, Twitter

Graph Neural Networks with Almost No Features

We propose a simple method to handle missing features in a graph which is based on feature propagation and is compatible with any GNN model. Our methods outperforms previously proposed approaches in both node-classification and link-prediction tasks, and is able to perform well even when 99% of the features are missing. Moreover, our approach is extremely scalable, running on a dataset with two million nodes in 10 seconds. We theoretically analyze our approach using tools from compressed sensing, showing that it acts as a low pass filter and finding guarantees on how well we can reconstruct the features.
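The abstract above describes propagating known features over the graph; the sketch below is an illustrative reconstruction of that idea (diffuse features over the normalized adjacency while clamping observed entries), not the authors' released implementation.

```python
# Hedged sketch: reconstruct missing node features by propagation over the graph.
import numpy as np

def feature_propagation(adj, x, known_mask, n_iters=40):
    """adj: (n, n) adjacency; x: (n, d) features with zeros where unknown;
    known_mask: (n, d) boolean mask of observed entries."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    a_norm = d_inv_sqrt @ adj @ d_inv_sqrt      # symmetric normalization
    out = x.copy()
    for _ in range(n_iters):
        out = a_norm @ out                      # diffuse features to neighbours
        out[known_mask] = x[known_mask]         # clamp the observed entries
    return out

# Tiny 3-node example with one fully missing node.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
known = np.array([[True, True], [False, False], [True, True]])
print(feature_propagation(adj, x, known))
```

The reconstructed features can then be fed into any GNN, which is what makes the approach model-agnostic.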

Speaker's Bio:

Emanuele is a Machine Learning Researcher at Twitter, as well as PhD student working on Graph Neural Networks at Imperial College London and supervised by Prof. Michael Bronstein. Before joining Twitter, he was working at Fabula AI, which was then acquired by Twitter in June 2019. Previously, Emanuele completed an MPhil at the University of Cambridge and a BEng at Imperial College London, both in Computer Science.

--

Ashley Varghese, Data Scientist, Canadian National Railway (CN)

Leveraging Novel Computer Vision and Machine Learning Solutions for Visual Inspection at Canadian National Railway (CN)

Through this talk, the audience will be introduced to some of the complex problems that impede the operational efficiency of railway systems. Canadian National Railway (CN) extends from coast to coast across North America. It is important to ensure that rail cars and railroads are used efficiently with minimal maintenance time. With a network as wide as CN's, carrying out manual inspections is prohibitively expensive and time-consuming. Leveraging computer vision and machine learning to build a novel solution for automated visual inspection results in higher operational efficiency for CN.

Speaker's Bio:

Ashley Varghese is a Data Scientist at Canadian National Railway. She works in the Automated Inspection Program, overseeing the development and retraining aspects of rail car inspection. She has over 10 years of research experience in computer vision and deep learning. Her research papers have been published in multiple international conferences and journals. She has previously worked as an AI scientist at Industrial Skyworks and as a researcher at the TCS Innovation Lab. She holds an MTech in Computer Science from the International Institute of Information Technology Bangalore (IIITB).

--

Stephan Zheng, Lead Research Scientist, Salesforce

The AI Economist x WarpDrive: Designing Economic Policy using High-Throughput Multi-Agent Reinforcement Learning

How you can do economic analysis and design economic policy using multi-agent reinforcement learning in hardware-accelerated economic simulations.

Speaker's Bio:

Stephan Zheng (www.stephanzheng.com) is a Lead Research Scientist and heads the AI Economist team at Salesforce Research. He currently works on using deep reinforcement learning and economic simulations to design economic policy. His work has been widely covered in the media, including the Financial Times, Axios, Forbes, Zeit, Volkskrant, MIT Tech Review, and others. He holds a Ph.D. in Physics from Caltech (2018), where he worked on imitation learning of NBA basketball players and neural network robustness, amongst others. He was twice a research intern with Google Research and Google Brain. Before machine learning, he studied mathematics and theoretical physics at the University of Cambridge, Harvard University, and Utrecht University. He received the Lorenz graduation prize from the Royal Netherlands Academy of Arts and Sciences for his master's thesis on exotic dualities in topological string theory and was twice awarded the Dutch national Huygens scholarship.

--

Michael Yan, Staff Data Scientist, and Greg Svitak, Director of Labs, SpotHero

Changing the Face of Parking: Lessons Learned from SpotHero's Intelligent, Dynamic Rate Board

Parking is a $30B industry in North America, yet the industry has changed very little in over 70 years. For example, operators typically set prices by simply surveying the price of parking in their local neighbourhood. In this talk, we will show how an interdisciplinary team at SpotHero created the industry’s first real-time rate board using information from digital sensors and parking Point of Sale (PARCS) systems. Our machine learning models optimize the price by the minute and the appropriate time bands for individual parking facilities. We will give an overview of the system, highlight some of our key learnings while collaborating with facility operators, and discuss some of the unique technical IoT and Machine Learning challenges we overcame.

Speaker's Bio:

Michael is an accomplished data scientist and thought leader with 10+ years of experience in research and in bringing advanced analytics and machine learning solutions to market. As a person with insatiable intellectual curiosity, he is passionate about finding actionable insights hidden in vast amounts of raw data. Michael likes data: manipulating it, making it (simulation), modeling it, visualizing it and, yes, even cleaning it. He has been extremely lucky to work in many different realms: he has built gambling market trading algorithms and live sports models for an online sportsbook, DSaaS tools housing customer analytics and recommendation engines for enterprise customers, and quantitative portfolio construction algorithms for a PE firm. Now he's working with a team of excellent engineers to help people park their cars using a cellphone app.

--

Alon Halevy, Director, Facebook AI, Facebook

Obtaining Answers from Social Media Data

The key technical problems that online social networks focus on today are detecting policy violating content (e.g., hate speech, misinformation) and ranking content to satisfy their users’ needs. By nature, these problems are somewhat vague and need to handle multi-modal content in many languages, and therefore do not naturally lend themselves to AI techniques based on declarative representations and reasoning. However, the machine learning techniques that are employed also have some drawbacks, such as the fact that it is hard to update their knowledge efficiently or to explain their results. In this talk I will outline a few opportunities where methods from symbolic AI, combined appropriately into the machine learning paradigm, can ultimately have an impact on our goals. In particular, I will describe Neural Databases, a new kind of database system that leverages the strength of NLP transformers to answer database queries over text, thereby freeing us from designing and relying on a database schema.

Speaker's Bio:

Alon Halevy has been a director at Facebook AI since 2019, where he works on Human Value Alignment and on the combination of neural and symbolic techniques for data management. Prior to Facebook, Alon was the CEO of Megagon Labs (2015-2018) and led the Structured Data Research Group at Google Research (2005-2015), where the team developed WebTables and Google Fusion Tables. From 1998 to 2005 he was a professor at the University of Washington, where he founded the database group. Alon is a founder of two startups, Nimble Technology and Transformic Inc. (acquired by Google in 2005). He received his Ph.D in Computer Science from Stanford in 1993. Alon co-authored two books: The Infinite Emotions of Coffee and Principles of Data Integration. He is a Fellow of the ACM and a recipient of the PECASE award and Sloan Fellowship. Together with his co-authors, he received VLDB 10-year best paper awards for the 2008 paper on WebTables and for the 1996 paper on the Information Manifold data integration system. In 2021, he received the Edgar F. Codd SIGMOD Innovations Award.

--

Krista Henrich, AI Product Manager &  Daniel Wagner, Director of Engineering - AI, Sensibill

Finance / Data Governance - Achieving Congruence Across the Org

Sensibill is a Fintech that builds intuitive digital tools that empower users to manage their financial records. We serve some of the largest financial institutions in the world, allowing them to harness the power of SKU level data and go beyond transaction visibility. In this talk we will be covering Sensibill's journey of building a data governance framework that allows our team to work at blazing fast speed while keeping our client's data safe and respecting all our contractual engagements.

Speaker's Bio:

Krista has worked in data, research, and localization, and as a Product Owner for mobile SDKs and APIs, for a collective 7 years. Krista is excited to now be championing Artificial Intelligence Product Management at Sensibill and on the Toronto tech scene.

An IT veteran, Daniel started his career in mobile development 3 years before iPhones were a thing and built countless large-scale e-commerce and CMS projects before diving into the world of fintech for the last 5 years. Daniel currently leads the Data & AI team at Sensibill.

--

Oren Etzioni, CEO, Allen Institute for AI (AI2)

Semantic Scholar, NLP, and the Fight against COVID-19

This talk will discuss projects focused on advancing AI for the common good, including the dramatic creation of the COVID-19 Open Research Dataset (CORD-19) and the broad range of efforts, both inside and outside of the Semantic Scholar project, to garner insights into COVID-19 and its treatment based on this data. It will also highlight key advances in NLP that have enabled this work.

Speaker's Bio:

Dr. Oren Etzioni is Chief Executive Officer at AI2. He is Professor Emeritus, University of Washington as of October 2020 and a Venture Partner at the Madrona Venture Group since 2000. His awards include Seattle’s Geek of the Year (2013), and he has founded or co-founded several companies, including Farecast (acquired by Microsoft). He has written over 100 technical papers, as well as commentary on AI for The New York Times, Wired, and Nature. He helped to pioneer meta-search, online comparison shopping, machine reading, and Open Information Extraction.

--

Carlos Guestrin, Professor, Stanford University

How Can You Trust Machine Learning?

Fundamental concepts and methods to understand the predictions, evaluation and trust in ML models

Speaker's Bio:

Carlos Guestrin is a Professor in the Computer Science Department at Stanford University. His previous positions include the Amazon Professor of Machine Learning at the Computer Science & Engineering Department of the University of Washington, the Finmeccanica Associate Professor at Carnegie Mellon University, and the Senior Director of Machine Learning and AI at Apple, after the acquisition of Turi, Inc. (formerly GraphLab and Dato); Carlos co-founded Turi, which developed a platform for developers and data scientists to build and deploy intelligent applications. He is a technical advisor for OctoML.ai. His team also released a number of popular open-source projects, including XGBoost, LIME, Apache TVM, MXNet, Turi Create, GraphLab/PowerGraph, SFrame, and GraphChi.

Carlos received the IJCAI Computers and Thought Award and the Presidential Early Career Award for Scientists and Engineers (PECASE). He is also a recipient of the ONR Young Investigator Award, NSF Career Award, Alfred P. Sloan Fellowship, and IBM Faculty Fellowship, and was named one of the 2008 ‘Brilliant 10’ by Popular Science Magazine. Carlos’ work received awards at a number of conferences and journals, including ACL, AISTATS, ICML, IPSN, JAIR, JWRPM, KDD, NeurIPS, UAI, and VLDB. He is a former member of the Information Sciences and Technology (ISAT) advisory group for DARPA.

--

Siim Sikkut, Government CIO of Estonia and Alex Benay, Government Azure Strategy, Microsoft & Former Government CIO, Canada

Exploring Opportunities and Challenges for AI in Digital Governments 

--

Nataliya Portman, Senior Data Scientist, Cineplex Digital Media

Recommendation Systems for Digital Out of Home Advertising

As a financial institution, how do you reach the right audiences about your products and services in the Digital Out of Home (DOOH) world? What is the best content strategy to follow when distributing ads through a network of thousands of digital screens across Canada? In this talk, I will demonstrate a probabilistic modeling approach developed at Cineplex Digital Media (CDM) and how insights derived from this recommendation model drive decision-making on content selection and content placement.

Speaker's Bio:

Nataliya received her Doctoral Degree in Applied Mathematics from the University of Waterloo in 2010, followed by postdoctoral training at the Neurological Institute in Montreal. Following her postdoctoral assignment, she developed a novel approach to brain tissue classification in early childhood brain MRIs using modern Computer Vision pattern recognition and perceptual image quality models. Nataliya has worked in many industries including biotech, materials science and automotive, and various start-up software companies. Throughout her career, she has applied her expertise in Mathematics to develop numerous models, including but not limited to machine learning algorithms and computationally efficient algorithms for model validation. She is the co-inventor of “Bid-Assist”, a strategy for setting up an initial bidding amount to discourage low bidding behaviour, and “AutoVision”, a mobile app that automatically takes pictures of vehicle views and damage recognized by an image classifier. Nataliya paved a new way for Data Science in the incentives/rewards industry. She developed predictive analytics tools that help channel leaders maximize the return on investment of their channel incentive programs. In January 2021, Nataliya joined Cineplex Digital Media as a Senior Data Scientist committed to the development of media content recommendation systems.

--

George Seif, Machine Learning Engineer, Altair Engineering

Cost Reduction Methods for Machine Learning in Production

MLOps has emerged as a powerful set of tools and strategies for effectively deploying Machine Learning models into production. As an ever-growing number of organizations look to deploy ML models, they face an expensive question: how much is this all going to cost? While the research community chases the latest and greatest, both data and models are getting bigger and more compute-intensive. This trend has led to ballooning costs, especially as more models are deployed to the cloud. In this talk, I will share several effective methods to reduce costs when deploying Machine Learning models. Namely, we will zoom in on two key areas for cost reduction: models and infrastructure.

Speaker's Bio:

George is a Machine Learning Engineer with expertise in bringing Machine Learning technologies to production at scale. In the past, he worked at Indus.ai (acquired by Procore Technologies) designing and building a Machine Learning System for applying Computer Vision to construction analytics. He currently works at Altair Engineering where he’s working on building an open, extensible, scalable, cloud-agnostic MLOps platform to make taking ML to production faster and easier.

--

Matthew Guzdial, Assistant Professor and CIFAR AI Chair, University of Alberta

Modeling Individuals without Data via a Secondary Task Transfer Learning Method

In most cases, deep neural networks (DNNs) require a great deal of data. There are approaches, such as zero-shot and few-shot learning, that can produce high quality DNNs with less or no data. However, these approaches still assume a large source dataset or a large secondary dataset to guide the transfer of knowledge. These are not assumptions that hold true when our goal is to model individual humans, who tend to produce much less data. In this talk we present a novel transfer learning method for producing a DNN for modeling the behaviour of a specific individual on an unseen target task, by leveraging a small dataset produced by that same individual on a secondary task. We make use of a specialized transfer learning representation and Monte Carlo Tree Search (MCTS). We demonstrate that our approach outperforms standard transfer learning approaches and other optimization methods on two human modeling domains: financial health and video game design.

Speaker's Bio:

Matthew Guzdial is an Assistant Professor in the Computing Science department of the University of Alberta and a Canada CIFAR AI Chair at the Alberta Machine Intelligence Institute (Amii). He is a recipient of an Early Career Researcher Award from NSERC, a Unity Graduate Fellowship, and two best conference paper awards. His work has been featured in the BBC, WIRED, Popular Science, and Time.

--

Ivey Chiu, AI R&D Lead, Telus

An Application of Model-Based Reinforcement Learning to Reduce Energy Consumption in TELUS data rooms: TELUS/Vector AI-for-Good Collaboration on the Development of the Energy Optimization System (EOS)

In this talk, we discuss the TELUS/Vector Institute AI-for-Good collaboration to apply model-based reinforcement learning (MBRL) to the reduction of energy consumption in TELUS data rooms. We will discuss the development of a model-based reinforcement learning algorithm we call the Energy Optimization System (EOS), including the algorithmic innovation we term Hyperspace Neighbour Penetration (HNP) to deal with slowly changing variables, as well as the processes involved with on-site testing and the results of our recently completed pilot test. In this pilot, we found that our algorithm reduced the energy consumption of a small data room by 8.9% including IT load, or 18.6% excluding IT load. Energy savings were calculated using the industry-standard International Performance Measurement and Verification Protocol (IPMVP), Option (B) Retrofit Isolation: All Parameter Measurement. Finally, we will also discuss the next steps in the EOS initiative.

Speaker's Bio:

Dr. Chiu is a Data Scientist with a wide domain of expertise in wireless networks, ecommerce and Artificial Intelligence. She has a B.A.Sc. in Manufacturing Systems Engineering from the Department of Engineering Science, and a M.A.Sc. and Ph.D. from the Department of Mechanical and Industrial Engineering, all from the University of Toronto. She held an NSERC Canada Graduate Fellowship during her Ph.D. studies and an NSERC Post-Doctoral Fellowship at Ryerson University. Her graduate and post-doctoral work focused on understanding and modeling creativity and designer behaviour in engineering design using cognitive psychology and natural language processing. Her research resulted in over 20 AI-related conference and journal papers.

Currently, she is the Applied AI R&D Lead in the TELUS Data Strategy & Enablement Team. She leads a diverse portfolio of internal and external data and AI R&D projects with partners such as the Vector Institute, the Alan Turing Institute in the UK, the GSMA and various start-ups. She is a sought-after speaker and panelist in Analytics & AI and career development, where she often speaks about how career plans need to be fluid and where she shares her perspective as a woman in STEM.

--

Doug Sherk, Senior Machine Learning Engineer II,  Axon Enterprise Inc

Scaling Deep Learning Model Training

As datasets and model sizes grow, training takes longer and longer, but scaling such systems doesn't need to be scary. Imagine if training jobs that previously took days instead took hours—and for roughly the same cost! Distributed training enables you to do this by splitting your model training across many instances in a cluster. Off-the-shelf tooling has improved over the last 1-2 years such that this process has become easy, and most of the challenges are in simply making the right design decisions up-front. It's important to also understand how research problems change when scaling up, e.g., the need to tune hyperparameters such as learning rate and batch size.
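As a concrete, hedged illustration of what "splitting your model training across many instances" can look like in practice, here is a minimal PyTorch DistributedDataParallel sketch; the toy model, synthetic dataset, and the linear learning-rate scaling heuristic are illustrative assumptions, not the speaker's recipe.

```python
# Hedged sketch: data-parallel training across processes with PyTorch DDP.
# Launch with, e.g.: torchrun --nproc_per_node=4 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(32, 2).cuda(), device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(4096, 32), torch.randint(0, 2, (4096,)))
    sampler = DistributedSampler(dataset)              # shards data across ranks
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    # A common heuristic (not a rule): scale the learning rate with the global batch size.
    base_lr, world_size = 1e-3, dist.get_world_size()
    opt = torch.optim.SGD(model.parameters(), lr=base_lr * world_size)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)                        # reshuffle per epoch
        for xb, yb in loader:
            xb, yb = xb.cuda(), yb.cuda()
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()           # gradients are all-reduced by DDP
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```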

Speaker's Bio:

Doug is currently building an end-to-end ML platform for computer vision edge devices at Axon, maker of the Taser stun gun. Doug was formerly the CTO at Passenger AI (YC S18), which was acquired by Zippin Inc. There, he led a team of engineers developing self-driving car interior monitoring solutions that were sold to Fortune 20 companies. Doug lived in the San Francisco Bay Area for several years, where he worked as a software engineer at companies like Mozilla and Zynga. He holds an MSc in Computer Science with a Machine Learning specialization from Georgia Tech and a BASc in Mechatronics Engineering from the University of Waterloo.

--

Natalie Klym, VP Market Development, Radium Cloud and David Clark, Senior Research Scientist, MIT Computer Science & Artificial Intelligence Lab

The Technologists are Not in Control: What the Internet Experience Can Teach us about AI Ethics and Responsibility

Fireside chat with David Clark, Senior Research Scientist, MIT Computer Science & Artificial Intelligence Lab

As AI evolves, many researchers face a moral dilemma as they watch their work make its way out of the lab and into society in ways they had not imagined or, more importantly, in ways they find objectionable. As a foundational technology that has reached maturity and is fully embedded in society, the Internet offers powerful lessons about unintended consequences. In this discussion, Dr. David Clark, Senior Research Scientist at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), shares his experiences with the Internet’s transition from lab to market and explains why “the technologists are not in control of the future of technology.” Clark has been involved with the development of the Internet since the 1970s. He served as Chief Protocol Architect and chaired the Internet Activities Board throughout most of the 80s, and more recently worked on several NSF-sponsored projects on next generation Internet architecture. In his 2018 book, Designing an Internet, Clark looks at how multiple technical, economic, and social requirements shaped and continue to shape the character of the Internet.

Speakers' Bio:

Natalie Klym is VP Market Development at Radium Cloud as well as an independent researcher and consultant. She has been leading digital technology innovation programs in academic and private institutions for 25 years including at MIT, the Vector Institute, and University of Toronto. She strives for innovation that is open, creative, and responsible.

David Clark is a Senior Research Scientist at the MIT Computer Science and Artificial Intelligence Laboratory. He is technical director of the MIT Internet Policy Research Initiative. Since the mid-70s, Clark has been leading the development of the Internet; from 1981-1989 he acted as Chief Protocol Architect in this development, and chaired the Internet Activities Board. His current research looks at re-definition of the architectural underpinnings of the Internet, and the relation of technology and architecture to economic, societal and policy considerations. He is helping the U.S. National Science Foundation organize their Future Internet Architecture program. He is past chairman of the Computer Science and Telecommunications Board of the National Academies, and has contributed to a number of studies on the societal and policy impact of computer communications. He was elected to the American Academy of Arts and Sciences in 2002 and serves as a member of its Council. He is the author of Designing an Internet, MIT Press, 2018.

--

Maryam Emami, CEO, AI Materia

Physics-Informed Machine Learning Methods for Materials Development

Materials development revolutionizes all aspects of our lives. However, the traditional approaches for the design and manufacturing of materials are often unsustainable and resource-intensive. Limited by sparse, decentralized data, and the performance frontier of existing product lines, new materials can take decades to develop. Applications of machine learning and artificial intelligence have been a transformative paradigm for materials science. As materials informatics has matured from a niche area of research into an established discipline, distinct frontiers of this discipline have come into focus, and best practices for applying machine learning to materials are emerging. This talk will discuss case studies of how machine learning accelerates the development of higher-performing products.

Speaker's Bio:

Dr. Maryam Emami is the Chief Executive Officer at AI Materia, a materials informatics platform company. AI Materia technology helps materials companies to discover, manufacture, and design advanced materials in 80% less time compared to traditional approaches.

In addition to her role at AI Materia, Maryam works with materials and manufacturing companies as an advisor around digital transformation and innovation.

She earned her Ph.D. and M.Sc. in Chemical Engineering from McMaster University and holds a B.Sc. in Petroleum Engineering. Her primary interest is the development and validation of physics-informed machine learning methods specific to applications in advanced manufacturing. She is a designated professional engineer, P.Eng.

--

Razi Bayati, Machine Learning Engineer, and Kien Ly, Senior Manager, Data Science & Analytics, Rogers

How Can Machine Learning Help 5G? An Application of Using Machine Learning to Accomplish Smart 5G

5G is the solution for ultra-dense, high-mobility, and large-scale mobile networks. To achieve this, efficient algorithms and agile end-to-end optimization methods are required. Machine learning is a very promising technology that is being promoted by academia and industry for 5G enablement. The superb capability of deep learning methods in modeling large-scale systems makes them a perfect candidate for enhancing telecom-specific algorithms and solving resource allocation problems in the 5G mobile network. At Rogers, we aim to utilize state-of-the-art ML-aided methods to design smarter and faster communication for our customers. In this talk, we will briefly introduce 5G, ML-assisted solutions for enabling it, and network design and planning use cases at Rogers Communications. We will also mention some challenges we face in moving our models to production.

Speaker's Bio:

Razi Bayati is a machine learning engineer and 5G researcher on the Enterprise Data and Analytics team at Rogers. She has a master's degree in electrical and computer engineering from The University of British Columbia. Her research focuses on solving optimization problems with machine learning algorithms, in particular deep learning.

--

Qi He, Senior Director of Engineering at LinkedIn, ACM Distinguished Member, LinkedIn

Constructing a Knowledge Graph for the World's Largest Professional Network

Online social networks such as Facebook and LinkedIn have been an integrated part of people’s everyday life. To improve the user experience and power the products around the social network, Knowledge Graphs (KG) are used as a standard way to extract and organize the knowledge in the social network.

This talk focuses on how to build Knowledge Graphs for social networks by developing deep NLP and GNN models, and through holistic optimization of the Knowledge Graph and the social network. Building a KG for social networks poses two challenges: 1) input data for each member in the social network is noisy, implicit, and multilingual, so a deep understanding of the input data is needed; 2) the KG and the social network influence each other via multiple organic feedback loops, so a holistic view of both networks is needed. In this talk, I will share the lessons we learned from tackling the above challenges over the past 9 years of building the Knowledge Graph for the LinkedIn social network. I will present how we use our KG to empower 20+ products at LinkedIn with high business impact.

Speaker's Bio:

Qi He is the Sr. Director of Engineering at LinkedIn, leading a team of 150+ machine learning scientists, software engineers and linguistic specialists to standardize LinkedIn data and build the LinkedIn Knowledge Graph by creating standard entity vocabulary, recognizing entities, building entity relationships, and using this data to serve the entire LinkedIn ecosystem including member engagement and monetization products. His strengths include 1) 15+ years of experience managing and executing large complex AI projects in Knowledge Mining and Management, Recommender Systems, Information Retrieval, and Language Processing with big business impact, 2) inventing and driving adoption of state-of-the-art Deep Learning and Natural Language Processing approaches in industry, and 3) building a strong organization and scaling it across geographies.

He is a member of the Board of Directors for ACM CIKM and served as General Chair of CIKM 2013 and PC Chair of CIKM 2019. He serves as Associate Editor of IEEE Transactions on Knowledge and Data Engineering (TKDE) and the Neurocomputing Journal. He has regularly served on the (senior) program committees of SIGKDD, SIGIR, WWW, CIKM, and WSDM for 10+ years. He received the 2008 SIGKDD Best Application Paper Award and the 2020 WSDM 10-year Test of Time Award. He has ~70 publications with 6,000+ citations to date. He is an ACM Distinguished Member and was featured as one of the People of ACM in February 2021 (https://www.acm.org/articles/people-of-acm/2021/qi-he).

--

Ludovic Bégué, CRM & Data Science Director, L'Oreal Canada and Mohamed Sabri, Consultant in MLOps, Rocket Science

Sharing L'Oréal Canada's Bespoke Data Science Strategy Framework

Why is it so important to have a strong AI strategy? Most AI initiatives fail, mainly because there is no strategy, no vision, and no framework to support the projects. L’Oréal Canada has faced several challenges while implementing its Data Science and AI practice. The goal of this session is to share L’Oréal Canada's experience in the field and how, as an organization, we think any company should implement data science and AI.

Some of the aspects covered during the session:

  • Assessing the level of maturity in data valuation and understanding the weaknesses of the organization
  • Identifying high-value projects, as done at L’Oréal Canada
  • How to effectively scope a project and avoid POCs?
  • How to build an ML team given the shortage of resources in the market?
  • In our opinion, what are the key points in implementing a data science practice?

This session will be co-presented by the Director of Data & CRM at L’Oréal Canada and the Partner of Rocket Science Development. Takeaways for the audience:

  • A reusable framework in PDF format
  • Use-case ideas to drive your AI practice
  • Real examples of projects in retail and e-commerce
  • Tips and advice based on field experience

Speakers' Bio:

Senior Global CRM and Customer Experience Strategist with 20 years of success in deploying Data Science, Consumer Centricity and Omni Channel marketing within various local and international organizations.

As CRM & Data Science Director at L’Oréal Canada since 2020, Ludovic leads a team that is solely focused on building better and profitable consumer experiences by bridging data capacities with digital and marketing platforms.

Mohamed Sabri is a results-driven data science and MLOps specialist with 8 years of experience in machine learning and deep learning, systems design, data architecture, data modeling, project design, and project implementation. This trilingual consultant has a very wide range of project experience across various industries. Mohamed has also been a director of data science and AI at a startup in Montréal; with his experience in the field and as a manager, he is capable of supporting all organizations in their AI project implementations. Mohamed is also the author of the book "Data Scientist Pocket Guide", available on Amazon.

--

Pieter Luitjens, CTO, Private AI

Deploying Transformers at Scale; Addressing Challenges and Increasing Performance

Transformer networks have taken the NLP world by storm, powering everything from sentiment analysis to chatbots. However, the sheer size of these networks presents new challenges for deployment, such as how to provide acceptable latency and unit economics. The de-identification tasks that Private AI's services perform rely heavily on Transformer networks and involve processing large amounts of data. In this talk, I will go over the challenges we faced and how we managed to improve the latency and throughput of our Transformer networks, allowing our system to process terabytes of data easily and cost-effectively.
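One generic lever for Transformer latency on CPU, shown here purely as an illustration (the talk does not say this is Private AI's approach), is post-training dynamic quantization of the linear layers; the model name below is an arbitrary example.

```python
# Hedged sketch: dynamic quantization of a Transformer's linear layers to cut
# CPU inference latency; the model choice is illustrative only.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_name = "distilbert-base-uncased"          # hypothetical example
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name).eval()

# Replace nn.Linear weights with int8 versions, dequantizing on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("John visited Toronto in 2021.", return_tensors="pt")
with torch.no_grad():
    logits = quantized(**inputs).logits
print(logits.shape)
```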

Speaker's Bio:

Pieter is the co-founder & CTO of Private AI. He worked on software for Mercedes-Benz and developed the first deep learning algorithms for traffic sign recognition deployed in cars made by one of the most prestigious car manufacturers in the world. He has over 10 years of engineering experience, with code deployed in multi-billion dollar industrial projects. Pieter specializes in ML edge deployment & model optimization for resource-constrained environments.

--

More to come!!

Long-form Learning Workshops:

Sowmya Vajjala, Researcher, National Research Council Canada

NLP Without a Ready-made Labeled Dataset

NLP tutorials and workshops typically start with a labeled/annotated dataset, and discuss different ways of representing text/building models.

However, in many real-world scenarios, we don't have that luxury of already having a labeled dataset. We may often end up in scenarios where we have a problem, and a way to solve it, but no dataset to start working on the solution! In this workshop, I will introduce some ways of approaching this problem, such as looking for existing datasets, data annotation, automatic data labeling, data augmentation, and transfer learning.
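As a taste of the "automatic data labeling" and "transfer learning" tactics mentioned above, here is a minimal, hedged sketch that uses an off-the-shelf zero-shot classifier to assign provisional labels to unlabeled text; the texts, label set, and confidence threshold are illustrative assumptions.

```python
# A bootstrapping sketch: weakly label data with a pretrained zero-shot model,
# then keep only high-confidence examples for human review.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

unlabeled_texts = [
    "The package arrived two weeks late and the box was crushed.",
    "Support resolved my billing issue in five minutes.",
]
candidate_labels = ["shipping problem", "billing", "product quality"]

weakly_labeled = []
for text in unlabeled_texts:
    result = classifier(text, candidate_labels)
    label, score = result["labels"][0], result["scores"][0]
    if score > 0.8:  # confidence threshold is a tunable assumption
        weakly_labeled.append({"text": text, "label": label, "score": score})

print(weakly_labeled)
```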

Speaker's Bio:

Sowmya Vajjala currently works as a researcher in Digital Technologies at National Research Council, Canada’s largest federal research and development organization. She has worked in the area of Natural Language Processing (NLP) over the past decade in various roles – as a software developer, researcher, educator, and a senior data scientist. She recently co-authored a book: “Practical Natural Language Processing: A Comprehensive Guide to Building Real World NLP Systems”, published by O’Reilly Media (June, 2020), which was also translated into Chinese. Her research interests lie in multilingual computing and the relevance of NLP beyond research both in industry practice as well as in other disciplines, through inter-disciplinary research.

--

Mathangi Sri, Vice President Data Science, Gojek

NLP in Ecommerce

We cover various aspects of text mining in e-commerce. We deep dive into sentiment analysis for review mining, covering both supervised and unsupervised approaches, and discuss the pros and cons of each.
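To make the supervised route concrete, here is a minimal baseline sketch (TF-IDF features plus logistic regression) on a tiny illustrative dataset; the unsupervised route contrasted in the session would instead rely on a sentiment lexicon or a pretrained model rather than labeled reviews.

```python
# A supervised review-sentiment baseline on toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "Great quality, arrived on time, would buy again",
    "Terrible fit and the fabric feels cheap",
    "Absolutely love it, exceeded expectations",
    "Broke after two days, asking for a refund",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["cheap material, very disappointed"]))  # expect [0]
```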

Speaker's Bio:

Mathangi Sri has a 17+ year proven track record in building world-class data science solutions and products. She has 20 patent grants in the area of intuitive customer experience and user profiles. She recently published a book with Apress/Springer, “Practical Natural Language Processing with Python”. She is currently heading the data organization of GoFood, Gojek. In the past she has built data science teams across large organizations such as Citibank, HSBC, and GE, and tech startups such as 247.ai and PhonePe. She is an active contributor to the data science community through lectures, talks, blogs, and advisory roles. She is a guest faculty member at many premium academic institutes across the country, such as IIIT Sri City, IIM Kashipur, and NIT Trichy. She was recognized as one of “The Phenomenal SHE” by the Indian National Bar Association in 2019, among the top 10 data scientists by Analytics India Magazine in 2018, and among the top AI leaders in India in 2021 by the 3AI association.

--

Guy Salton, Solutions Engineering Lead, Run:AI

AI Inference Workloads: Solving MLOps Challenges in Production

MLOps hurdles don't end after models are pushed to production. In the ML lifecycle, inference workloads present a critical challenge, where throughput and latency become key measures and teams struggle to meet efficient GPU utilization levels. In this workshop applicable to both Data Scientists and ML Engineers, Guy Salton will give an overview of the challenges in moving ML prototypes to production, and how best-in-class ML teams are successfully overcoming these hurdles. We’ll discuss using fractional GPU capabilities to improve throughput and reduce latency, and we’ll show how one organization built an inference platform on top of Kubernetes with the NVIDIA A100 MIG to support their scaling AI initiatives. There are very few organizations using the new NVIDIA MIG functionality successfully, so even if you’re not using A100s yet, this is a unique opportunity to see how the MIG works for inference use cases.

Speaker's Bio:

Guy Salton is the Solutions Engineering Lead at Run:AI, specializing in the fields of DevOps, Cloud Computing, Kubernetes, Containers, Virtualization, CI/CD, and AI computing. He runs POCs and technical projects for Run:AI's commercial and enterprise customers, including on-site installations and workshops. Guy speaks at conferences and meetups around the world, writes blog posts, and delivers webinars.

--

Ushnish Sengupta, Ph.D. Candidate, University of Toronto

Algorithmic Bias in Human Resources, Responsible AI, and Business Strategy

This session discusses algorithmic bias in Human Resources, including AI and ML projects. It makes a convincing case that the bias and fairness issues that have been identified are grounded in business strategy decisions. Importantly, the session takes a practitioner perspective, viewing Human Resource technology project implementation from the point of view of project managers and middle managers. The session starts by identifying the symptoms of the problem discussed in public press articles, then completes a deep dive into the root causes by unpacking the layers of business and strategy decisions that led to the issues discovered after system implementation. We also identify strategies and tactics that project managers and middle management can use in developing responsible Human Resource algorithms, including AI and ML projects.

--

Jacopo Tagliabue, Coveo - Lead AI Scientist

MLOps Without Much Ops

It is indeed a wonderful time to build machine learning systems, as we don’t have much to do anymore! Thanks to a growing ecosystem of tools and shared best practices, even small teams can be incredibly productive at “reasonable scale”.

In this talk, we present our philosophy for modern, no-nonsense data pipelines, highlighting the advantages of a PaaS approach, and showing (with open, freely available code) how the entire toolchain works on real-world data with realistic constraints. We conclude with some unsolicited advice on the future of ML for “reasonable” companies, based on our experience in small and large organizations.

Speaker's Bio:

Educated in several acronyms across the globe (UNISR, SFI, MIT), Jacopo Tagliabue was co-founder and CTO of Tooso, an A.I. company in San Francisco acquired by Coveo in 2019. Jacopo is currently the Lead A.I. Scientist at Coveo, shipping models to hundreds of customers and millions of users. When not busy building products, he is exploring research topics at the intersection of language, reasoning and learning: he is a committee member for international NLP/IR workshops, and his work is often featured in the general press and A.I. venues (e.g. NAACL, SIGIR, RecSys, ACL). In previous lives, he managed to get a Ph.D., do sciency things for a pro basketball team, and simulate a pre-Columbian civilization.

--

Dr. Kirell Benzi - Data Artist | Researcher, EPFL

AI Art Initiation Workshop

In this AI art initiation workshop, we introduce powerful yet accessible tools to be creative with machine learning. We first start by reviewing the different aspects of an AI art piece and rapidly move on to the creation and manipulation of the dataset we will use. The second, and most important, part of the workshop is dedicated to learning the basics of Cables.gl, a visual programming framework in the browser. This environment allows us to quickly iterate over different designs and opens the door to a large number of uses outside art, such as motion graphics or data visualization. While no real programming experience is required, general tech-savviness is required in order to be comfortable.

**Pre-requisites**

To apply for the workshop, you need to be comfortable manipulating files on your system and browsing the web, and you should be completely autonomous with your computer/laptop. You should have a working microphone and a high-speed internet connection. Your browser needs to support WebGL (Chrome preferred); make sure you have the latest version and test it here: [https://get.webgl.org/](https://get.webgl.org/). A physical mouse is also preferred, as we will manipulate 3D objects. A large display (or two) is recommended so you can work alongside the videoconference window.

Speaker's Bio:

Dr Kirell Benzi is a data artist, public speaker and AI researcher. His work revolves around the creation of aesthetic experiences that inspire, educate and empower large audiences using state-of-the-art technology. Through a hypnotic visual semantic, he tries to demonstrate that algorithms have a soul; and that we can create positive emotions from complexity using methods that come straight from scientific research.

--

Ville Tuulos (CEO/Co-Founder) and Oleg Avdeev, Outerbounds; Clive Cox (Chief Technology Officer), Alejandro Saucedo (Director of Machine Learning Engineering), and Adrián González Martín (ML Engineer), Seldon

Develop and Deploy ML Projects with Metaflow and Seldon

Over the past years, a new stack of mature, open-source tools for MLOps has started emerging: Metaflow was started at Netflix to make it easy for data scientists to develop robust ML workflows which can be used to train models at scale. Seldon is powering tens of thousands of clusters for model deployments. Together, Seldon and Metaflow cover the full ML project lifecycle from prototyping to production deployments. This is a technical workshop focusing on the full stack of ML infrastructure. Learn from core developers of Seldon and Metaflow how to develop models at scale, and how to deploy them to production-grade infrastructure. We will have an open QA, so come prepared to ask questions about any aspects of ML infrastructure.
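For a flavor of the Metaflow side of the stack, here is a minimal flow sketch; the steps and the toy "model" are placeholders, and deploying the resulting artifact to Seldon is a separate step not shown here.

```python
# A minimal Metaflow workflow: each @step is checkpointed, and every artifact
# assigned to self is versioned automatically per run.
from metaflow import FlowSpec, step


class TrainFlow(FlowSpec):

    @step
    def start(self):
        # Load or generate training data; here just a placeholder list.
        self.data = [(x, 2 * x + 1) for x in range(100)]
        self.next(self.train)

    @step
    def train(self):
        # Fit a trivial "model" (a slope) so the sketch stays self-contained.
        xs, ys = zip(*self.data)
        self.slope = (ys[-1] - ys[0]) / (xs[-1] - xs[0])
        self.next(self.end)

    @step
    def end(self):
        print(f"trained slope: {self.slope}")


if __name__ == "__main__":
    TrainFlow()
```

Running `python train_flow.py run` executes the steps in order; swapping in real training code does not change the shape of the flow.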

Speakers' Bio:

Ville has been developing infrastructure for machine learning for more than two decades. He has worked as an ML researcher in academia and as an infrastructure leader at a number of companies, including Netflix where he led the ML infrastructure team that created Metaflow, a popular open-source framework for data science infrastructure. He is a co-founder and CEO of Outerbounds, a company that continues the Metaflow journey. He is also the author of a new book, Effective Data Science Infrastructure, which will be published by Manning in 2021.

Clive is CTO of Seldon. Seldon helps enterprises put machine learning into production. Clive developed Seldon's open source Kubernetes based machine learning deployment platform Seldon Core. He is also a core contributor to the Kubeflow and KFServing projects.

Alejandro is the Director of Machine Learning Engineering at Seldon Technologies, where he leads large scale projects implementing open source and enterprise infrastructure for Machine Learning Orchestration and Explainability. Alejandro is also the Chief Scientist at the Institute for Ethical AI & Machine Learning, where he leads the development of industry standards on machine learning explainability, adversarial robustness and differential privacy. With over 10 years of software development experience, Alejandro has held technical leadership positions across hyper-growth scale-ups and has a strong track record building cross-functional teams of software engineers.

Linkedin: https://linkedin.com/in/axsaucedo

Twitter: https://twitter.com/axsaucedo

Github: https://github.com/axsaucedo

Website: https://ethical.institute/

--

Chanchal Chatterjee - Cloud AI Leader and Elvin Zhu - AI Engineer, Google

From Concept to Production: Template for the Entire ML Journey

We created an open-source template in Python for the entire ML journey, from concept to production. The workshop offers a two-part hands-on tutorial; each part runs for two hours.

At the end of this tutorial you will have hands-on experience building a model from concept to a final production-ready ML pipeline. The tutorial will be implemented on the Google Cloud Platform with Vertex AI. Models include XGBoost and TensorFlow models.

Speaker's Bio:

Chanchal Chatterjee, Ph.D., has held several leadership roles in machine learning, deep learning, and real-time analytics. He is currently leading Machine Learning and Artificial Intelligence at Google Cloud Platform. Previously, he was the Chief Architect of the EMC CTO Office, where he led end-to-end deep learning and machine learning solutions for data centers, smart buildings, and smart manufacturing for leading customers. Chanchal has received several awards, including an Outstanding Paper Award from the IEEE Neural Network Council for adaptive learning algorithms, recommended by MIT professor Marvin Minsky. Chanchal founded two tech startups between 2008 and 2013. He has 29 granted or pending patents and over 30 publications. Chanchal received M.S. and Ph.D. degrees in Electrical and Computer Engineering from Purdue University.

--

Dr. Deirdre Kelly, Leadership Specialist, NAV Canada

Moving Beyond Average: Adopting Inclusive Design into Business Practices, Policies, and Systems

For practical and pragmatic reasons, the world is often designed for the "average" user. As a result, designers and developers across a variety of disciplines, including Artificial Intelligence (AI) and Machine Learning (ML), often make decisions that exclude particular audiences in their services and products.

This workshop was made for AI and ML experts, businesses, and other researchers and practitioners who are looking to move beyond designing for the average. It is intended for those who are instead interested in narrowing design gaps by integrating inclusive design into the everyday of business.

This workshop will:

1) Challenge participants to identify the biases, systems, and other barriers that lead to narrow or exclusive design;

2) Encourage participants to consider how these barriers present within their own business and how these barriers impact their products and services;

3) Equip participants with the knowledge, skills, and tools to build inclusive design into how they do business; and,

4) Provide concrete success metrics to provide accountability and help keep teams on track.

As research in cognition and neuroscience grows, we better understand the many different ways in which people learn, think, and interact with the world. This improved understanding of the diversity of human cognition provides concrete evidence of the limitations of designing for the average user and points to how this decision can have the consequence of leaving large portions of the population under-represented and under-served.

At a time when we are increasingly acknowledging the value of including a diversity of voices, companies need to look for ways to incorporate inclusive design into their business practices. By making the decision to continuously incorporate diverse user feedback into research, development and implementation cycles, companies can better ensure that products and services empower participation and inclusion.

Inclusive by default may be the desired end-state but it is a goal that can only be realized by continuous dedication, intentionality, and years of inclusive design practice. By participating in this workshop, you will be making a commitment to, and a step forward on, your journey towards increased inclusion.

Speaker's Bio:

Dr. Deirdre Kelly has applied her expertise in cognitive science, decision-making, and user experience design to help people, teams, and organizations find novel and innovative solutions to complex design challenges. She is currently leading research and development in the design of healthy and effective organizations and organizational culture. Dr. Kelly is an accomplished writer and speaker who has presented at both international and domestic venues. In her free time, Dr. Kelly is dedicated to serving her community through research and advocacy. She is currently working with YouTube activist Jessica McCabe from "How to ADHD" to develop resources that empower effective decision-making for people with Attention Deficit Hyperactivity Disorder (ADHD).

Dr. Kelly has a strong commitment to life-work balance. She stays grounded with a regular yoga and meditation practice and by getting out into the woods. She enjoys writing satire, poetry, and satirical poetry as well as reading all of the time and all of the things. Dr. Kelly is grateful to share her life with a supportive partner, a furry companion named Riley, and friends who she thinks are some of the best people out there.

--

Sam Lightstone, Software Engineer, Facebook

DynaTask, A New Open Source Approach for AI Benchmarking

In this session we'll introduce and demo a new open source paradigm for AI benchmarking, called DynaTask. This novel benchmark suite uses dynamic adversarial data collection to evaluate AI models, and assess how easily an AI can be fooled by humans.

Speaker's Bio:

Sam Lightstone recently joined the technical leadership at Facebook. From 2020-2021 Sam was IBM Chief Technology Officer for AI. From 2017-2020 Sam was the IBM Chief Technology Officer for Data focusing on IBM’s database and big data portfolio. He is cofounder of the IEEE Workgroup on Self-Managing Database Systems. Sam has more than 65 patents issued and pending and has authored 4 books and over 30 papers. Sam’s books have been translated into Chinese, Japanese and Spanish. In his spare time he is an avid guitar player and fencer. His Twitter handle is "samlightstone".

--

Shreya Shankar - Ph.D. Student, UC Berkeley

Towards Observability for Machine Learning Pipelines

Software organizations are increasingly incorporating machine learning (ML) into their product offerings, driving a need for new data management tools. Many of these tools facilitate the initial development and deployment of ML applications, contributing to a crowded landscape of disconnected solutions targeted at different stages, or components, of the ML lifecycle. A lack of end-to-end ML pipeline visibility makes it hard to address any issues that may arise after a production deployment, such as unexpected output values or lower-quality predictions.

In this talk, we propose a system that wraps around existing tools in the ML development stack and offers end-to-end observability. We introduce our prototype and our vision for mltrace, a platform-agnostic system that provides observability to ML practitioners by (1) executing predefined tests and monitoring ML-specific metrics at component runtime, (2) tracking end-to-end data flow, and (3) allowing users to ask arbitrary post-hoc questions about pipeline health.
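The sketch below illustrates the general idea of component-level observability with a logging decorator; it is a generic illustration under assumed names (`observed`, `RUN_LOG`), not mltrace's actual API.

```python
# Illustration of component-level observability: wrap each pipeline component
# so its runtime, and the results of predefined checks on its output, are
# recorded and can be queried after the fact.
import functools
import json
import time

RUN_LOG = []  # in a real system this would be a database, not a list


def observed(component_name, checks=()):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            output = fn(*args, **kwargs)
            RUN_LOG.append({
                "component": component_name,
                "duration_s": round(time.time() - start, 4),
                "checks": {check.__name__: bool(check(output)) for check in checks},
            })
            return output
        return wrapper
    return decorator


def no_nulls(batch):
    return all(v is not None for row in batch for v in row.values())


@observed("featurize", checks=(no_nulls,))
def featurize(raw_rows):
    return [{"length": len(r)} for r in raw_rows]


featurize(["hello", "world"])
print(json.dumps(RUN_LOG, indent=2))
```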

Speaker's Bio:

Shreya Shankar is a computer scientist living in the Bay Area, building systems to operationalize machine learning (ML) workflows. Her research focuses on end-to-end observability for ML systems, particularly in the context of heterogeneous stacks of tools. She is currently pursuing her Ph.D. in the RISE Lab at UC Berkeley. Previously, she was the first ML engineer at Viaduct, did research at Google Brain, and obtained her BS and MS in computer science from Stanford.

--

Mark McQuade, ML Success & Business Development Lead & Philipp Schmid, Machine Learning Engineer and Tech Lead, Hugging Face

Train your first NLP Transformer Model with Amazon SageMaker and Hugging Face

Getting Started & Going to Production with Hugging Face and Amazon SageMaker
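A minimal sketch of the training-job launch pattern this workshop walks through is shown below; the IAM role, S3 paths, entry-point script, and version strings are placeholders you would replace with values supported by your SageMaker SDK release.

```python
# Launch a Hugging Face training job on SageMaker (placeholders throughout).
from sagemaker.huggingface import HuggingFace

huggingface_estimator = HuggingFace(
    entry_point="train.py",          # your training script (assumption)
    source_dir="./scripts",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
    hyperparameters={"epochs": 3, "model_name": "distilbert-base-uncased"},
)

# Channels map to S3 prefixes; inside the job they appear under /opt/ml/input/data/.
huggingface_estimator.fit({
    "train": "s3://your-bucket/train",  # placeholder S3 URIs
    "test": "s3://your-bucket/test",
})
```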

Speaker's Bio:

Philipp Schmid is a Machine Learning Engineer and Tech Lead at Hugging Face, where he leads the collaboration with the Amazon SageMaker team. He is passionate about democratizing and productionizing cutting-edge NLP models and improving the ease of use for Deep Learning.

Mark McQuade is an AWS and cloud-based solution specialist, knowledge addict, relationship builder and ML Success and Business Development Lead at Hugging Face. Every day, Mark gets to learn more about what he is passionate about professionally – AI and machine learning – as well as the fascinating world of data. As a technology evangelist, you’ll often find Mark promoting data and AI/ML at talks, webinars, podcasts and industry events.

--

Rahul Ghosh, Vice President, and Himanshu Sharad Bhatt, Research Director, American Express AI Labs

Unlocking the Potential of Unstructured Data in Finance Through Document Intelligence

According to projections, 80% of worldwide data will be unstructured by 2025. The Financial Services (FS) industry is no different: most enterprises hold a vast array of unstructured data that is largely under-analysed. Typically, unstructured data refers to information that is not organized in a pre-defined manner or does not have a pre-defined data model. This data is more challenging to interpret but can deliver a more comprehensive and holistic understanding for financial services use cases. In this talk, we focus on unstructured documents in finance and on how Document Intelligence, i.e., AI-powered automated analysis of documents, makes it possible to tap into these opportunities by analyzing the huge amount of information present in such unstructured documents.

In financial services, unstructured documents include financial statements, invoices, bank statements, policies, contracts, marketing creatives, etc. Data residing in such documents comes in a variety of types, including images, tables, figures, and text. While there are challenges around processing documents, the ability to quickly make decisions by leveraging such data can provide differentiated value propositions and competitive benefits. These benefits include improved operational excellence, automated compliance or regulatory workflows, insights discovered by mining/matching disparate data sources, and an overall enhanced customer experience. However, the very nature of unstructured data prohibits the direct application of the AI/ML techniques that can be seamlessly applied to structured data.

This talk will present the art and science behind developing Document Intelligence solutions, covering select use cases involving unstructured documents, showing the business opportunities present, and describing the technical challenges involved. Subsequently, we provide an outline for developing Document Intelligence solutions that can aggregate, query, analyse, and accelerate the understanding of such data to unveil deep insights across Financial Services use cases.
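As a small, hedged illustration of the ingestion end of such a pipeline (not Amex's internal tooling), the sketch below pulls raw text and detected tables out of a PDF with pdfplumber so downstream NLP steps have something structured to work with; the file path is a placeholder.

```python
# Extract text and tables from an unstructured financial document (placeholder path).
import pdfplumber

with pdfplumber.open("statement.pdf") as pdf:
    for page_number, page in enumerate(pdf.pages, start=1):
        text = page.extract_text() or ""
        tables = page.extract_tables()  # list of row lists per detected table
        print(f"page {page_number}: {len(text)} chars, {len(tables)} table(s)")
        for table in tables:
            if table:
                header, *rows = table
                print("  columns:", header)
```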

Speaker's Bio:

Rahul Ghosh is a Vice President at American Express AI Labs. He received his MS and PhD from Duke University, NC, USA. In his current role at Amex, Rahul is responsible for the R&D function of the lab, cutting across several areas of NLP, document AI, ML algorithms, and optimization. His research interests include AI systems, Cloud AI, and, more generally, the intersection of distributed systems and AI. Prior to joining Amex, Rahul worked at Xerox Research India, at the IBM Product Group in NC, and at the IBM T. J. Watson Research Center in NY. He is a co-author of 35+ peer-reviewed papers and co-inventor of 50 US patents (granted/pending).

Himanshu Sharad Bhatt is currently a Research Director at American Express AI Labs, where he is actively involved in developing Document AI-based solutions for unstructured data analytics. Prior to joining Amex in 2017, Himanshu worked with Xerox Research, India on building unstructured data analytics capabilities for the contact centers and services division. Himanshu holds a PhD in Computer Science & Engineering; his thesis was recognized with the best thesis award by INAE and IUPRAI in 2014. Over the years, his work has led to 30+ publications in reputed conferences and journals, and he has 5 US patents to his credit.

--

Leonardo De Marchi, Head of Data Science and Analytics, Financial start-up - Stealth

Modern NLP: Learning to Apply Real Use Cases

Extracting knowledge from text data has always been one of the most researched topics in machine learning, but only recently have we witnessed breakthroughs that put NLP in the spotlight. Much information is stored in unstructured data, like text, which is extremely important in many different fields, from finance to social media and e-commerce. In this course, we will go through Natural Language Processing fundamentals, such as pre-processing techniques, embeddings, and more. This will be followed by practical coding examples, in Python, that teach how to apply the theory to real use cases. The goal of this workshop is to provide attendees with all the basic tools and knowledge they need to solve real problems and understand the most recent and advanced NLP topics.
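As a preview of those fundamentals, here is a minimal sketch of light pre-processing followed by training small word embeddings with gensim; the three-sentence corpus is illustrative only, and a real project would use a much larger corpus and a proper tokenizer.

```python
# Light pre-processing followed by small Word2Vec embeddings on toy data.
import re
from gensim.models import Word2Vec

corpus = [
    "Fast shipping and great customer service",
    "The customer service team resolved my refund quickly",
    "Slow shipping ruined an otherwise great purchase",
]

def preprocess(text):
    # lowercase, strip punctuation, split on whitespace
    return re.sub(r"[^a-z\s]", "", text.lower()).split()

tokenized = [preprocess(doc) for doc in corpus]

model = Word2Vec(sentences=tokenized, vector_size=50, window=3,
                 min_count=1, workers=1, seed=42)
print(model.wv["shipping"][:5])                 # first few embedding dimensions
print(model.wv.most_similar("shipping", topn=2))
```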

Speaker's Bio:

Leonardo De Marchi holds a Master's in Artificial Intelligence and has worked as a Data Scientist in the sports world, with clients such as the New York Knicks and Manchester United, and with large social networks, like Justgiving.

He provides consultancy and training for small and large companies. His previous experience includes Head of Data Science and Analytics at Bumble, the largest dating site with over 500 million users, where he led the team through an acquisition and an IPO. He is also the lead instructor at ideai.io, a company specialized in Reinforcement Learning, Deep Learning and Machine Learning training; more details on the workshops are available from ideai.io.

He is also a contractor for several companies and for the European Commission, as an expert in AI and Machine Learning. As an author, he wrote “Hands On Deep Learning” and created an online training course for O’Reilly, Introduction to Reinforcement Learning.

In the academic world, he also helped set up the PhD centre on Interactive Artificial Intelligence and will take part in the Inner Assessment Board to assign funding to Irish research in AI.

--

Stefanie Molin, Data Scientist / Software Engineer | Author of Hands-On Data Analysis with Pandas, Bloomberg

Introduction to Data Analysis Using Pandas

Working with data can be challenging: it often doesn’t come in the best format for analysis, and understanding it well enough to extract insights requires both time and the skills to filter, aggregate, reshape, and visualize it. This session will equip you with the knowledge you need to effectively use pandas – a powerful library for data analysis in Python – to make this process easier. Pandas makes it possible to work with tabular data and perform all parts of the analysis, from collection and manipulation through aggregation and visualization. While most of this session focuses on pandas, during our discussion of visualization we will also introduce, at a high level, matplotlib (the library that pandas uses for its visualization features, which, when used directly, makes it possible to create custom layouts, add annotations, etc.) and seaborn (another plotting library, which features additional plot types and the ability to visualize long-format data).
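A minimal sketch of that filter, aggregate, reshape, and visualize loop, on a tiny made-up dataset, might look like this:

```python
# Filter -> aggregate -> reshape -> visualize with pandas.
import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(["2021-11-01", "2021-11-01", "2021-11-02", "2021-11-02"]),
    "region": ["East", "West", "East", "West"],
    "sales": [120, 90, 150, 80],
})

recent = df[df["date"] >= "2021-11-01"]                               # filter
totals = recent.groupby("region")["sales"].sum()                      # aggregate
wide = recent.pivot(index="date", columns="region", values="sales")   # reshape

print(totals)
wide.plot(kind="bar", title="Sales by region")                        # visualize (matplotlib under the hood)
```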

Speaker's Bio:

Stefanie Molin is a data scientist and software engineer at Bloomberg in New York City, where she tackles tough problems in information security, particularly those revolving around anomaly detection, building tools for gathering data, and knowledge sharing. She is also the author of “Hands-On Data Analysis with Pandas,” which is currently in its second edition. She holds a Bachelor of Science degree in operations research from Columbia University's Fu Foundation School of Engineering and Applied Science. She is currently pursuing a master’s degree in computer science, with a specialization in machine learning, from Georgia Tech. In her free time, she enjoys traveling the world, inventing new recipes, and learning new languages spoken among both people and computers.

--

Ramy Nassar, Managing Partner, 1000 Days Out

A Human-Centred Design Approach to AI & ML

This highly-interactive workshop gives participants a deep dive into AI, machine learning, and emerging technology, through the lens of human-centered design. This approach provides individuals & teams a repeatable and scalable toolkit that can be applied to building better AI-enabled solutions. The program, delivered by the author of the upcoming AI Product Design Handbook, is intended for those involved with bringing disruptive digital products & services to market. Participants will walk away with a set of practical tools, methods & frameworks focused on applying these technologies to customer and organizational problems.

Speaker's Bio:

Ramy is the founder of 1000 Days Out and author of the upcoming AI Product Design Handbook. As the former Managing Director of Design & Strategy for Architech and Head of Innovation for Mattel, he has led diverse teams in the creation of disruptive new digital products, services & platforms.

Ramy and his team at 1000 Days Out work across a wide range of industries with clients including Cadillac Fairview, Apple, Air Canada, Facebook, New Balance, Telus and CIBC.

Ramy teaches Design Thinking at McMaster University and in the Master's of Engineering, Innovation & Entrepreneurship program at Ryerson University. Ramy is a regular speaker at international events including World Usability Congress, IxDA, FITC, AI Everything, Machine Learning Exchange, AI Business Summit and World Mobile Congress.

--

Sophie Dionnet, General Manager, Business Solutions, Dataiku

Using AI Governance to Safely Scale AI With Speed

Imagine a steady stream of insights to fuel business decisions, 360-degree customer views to boost relevance and revenue, and faster, smarter recommendations to accelerate innovation. In reality, AI models that are fragmented, undocumented, and ungoverned can expose your business to operational, reputational, and legal hazards. This is why reimagining how you govern AI projects and models is imperative to ensuring the transparency, trust, and control needed to scale your business at speed with AI. In this session, Sophie Dionnet, General Manager, Business Solutions at Dataiku, will show you how to scale at speed while taking control of AI.

Speaker's Bio:

Sophie is the General Manager of Dataiku's Business Solutions team. She is focused on the development of industry-specific offerings as well as offerings related to AI governance. Sophie joined Dataiku in 2019 after 14 years in the financial industry, where she was Chief Operating Officer of a multi-asset management unit at a leading asset management firm. Sophie draws on extensive experience in strategic business leadership, management of complex transformation projects (IT, regulatory, risk management, operational) and development of new businesses, particularly in responsible investment.

--

Jonathan Quimet, Sales Engineer, ModelOp

Managing AI/ML Model Risk

In this session we will explore the differences between ModelOps and MLOps, and what this means for your ability to manage model risk. As more and more models, both traditional and AI/ML, are deployed in the business environment, the risk to your business increases. Whether the risk is direct financial impact to your business or additional scrutiny by governing agencies, we will explore how a robust ModelOps solution can help you reduce that risk and enable your business to be successful.

Speaker's Bio:

At ModelOp, Jon works with F500 companies to understand their challenges with AI and decision-making model operations (ModelOps) and to demonstrate how ModelOp Center helps solve their technical, operational, and business ModelOps challenges. Jon has 15 years of experience as a technologist and sales engineer. He has worked with AI technology products for the last four years, complementing his many years of experience with digital business automation products.

--

Aparna Dhinakaran, Chief Product Officer, Arize AI

Grassroots Responsible AI: Operationalizing AI Ethics From The Inside Out

In this workshop, Aparna will present a talk on how to operationalize responsible AI using machine learning observability techniques, notably explainability: she will look at how to use statistical distance checks to monitor features and model output in production, how to analyze the effects of changes on models, and how to use explainability techniques to determine whether issues are model- or data-related. The first step in better understanding how to manage responsible AI is model transparency: understand and explain how models arrive at specific outcomes for any cohort of predictions, and implement ML observability into your ML workflow. Secondly, this workshop will include a panel discussion. Engage with unique perspectives on how to approach responsible AI from an operational standpoint, and join major thought leaders championing responsible, fair, and ethical AI. Drawing on a diverse array of educational and professional backgrounds, this discussion will uncover actionable insights into how to approach responsible AI from the ground up.
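One way to picture the "statistical distance checks" mentioned above is a Population Stability Index (PSI) comparison between a feature's training baseline and its production distribution. The sketch below is a generic illustration on simulated data, not Arize's product API, and the usual 0.1/0.25 thresholds are rules of thumb rather than universal standards.

```python
# Compare a production feature distribution to its training baseline with PSI.
import numpy as np

def psi(baseline, production, bins=10):
    edges = np.histogram_bin_edges(baseline, bins=bins)
    prod_clipped = np.clip(production, edges[0], edges[-1])  # keep all values in range
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(prod_clipped, bins=edges)[0] / len(production)
    # avoid division by zero / log(0)
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0, 1, 10_000)
prod_feature = rng.normal(0.4, 1, 10_000)   # simulated drift

score = psi(train_feature, prod_feature)
print(f"PSI = {score:.3f}")  # > 0.25 is often treated as significant drift
```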

Speaker's Bio:

Aparna Dhinakaran is Chief Product Officer at Arize AI, a startup focused on ML observability. She was previously an ML engineer at Uber, Apple, and TubeMogul (acquired by Adobe). During her time at Uber, she built several core ML infrastructure platforms, including Michelangelo. She has a bachelor’s from Berkeley's Electrical Engineering and Computer Science program, where she published research with Berkeley's AI Research group. She is on a leave of absence from the Computer Vision Ph.D. program at Cornell University.

--

Ram Seshadri, Google - Machine Learning Program Manager

Deep AutoViML For Tensorflow Models and MLOps Workflows

Deep AutoViML is a powerful new deep learning library with a very simple design goal: make it as easy as possible for novices and experts alike to experiment with and build tensorflow.keras preprocessing pipelines and models in as few lines of code as possible.

Deep AutoViML enables data scientists, ML engineers, and data engineers to rapidly prototype tensorflow models and data pipelines for MLOps workflows using the latest TF 2.4+ and Keras preprocessing layers. You can upload your saved model to any cloud provider and make predictions out of the box, since all the data preprocessing layers are attached to the model itself!

In this webinar, we will discuss the problems that Deep AutoViML can solve, walk through its architecture design, and demo how to build powerful TF.Keras models across structured data, NLP, and image domains.
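The core idea, preprocessing layers baked into the saved model, can be sketched directly against tf.keras as below; deep_autoviml generates this kind of pipeline for you, so the snippet is a hand-written illustration of the pattern rather than the library's own code.

```python
# Attach a preprocessing layer to the model so the SavedModel accepts raw inputs.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing  # TF 2.4+

raw_feature = np.array([[1.0], [2.0], [3.0], [4.0]], dtype="float32")
labels = np.array([0, 0, 1, 1], dtype="float32")

normalizer = preprocessing.Normalization()
normalizer.adapt(raw_feature)  # learn mean/variance from the raw data

inputs = tf.keras.Input(shape=(1,), dtype="float32")
x = normalizer(inputs)                       # preprocessing lives inside the graph
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(raw_feature, labels, epochs=5, verbose=0)

# Serving gets raw values; no external scaler needs to ship with the model.
model.save("saved_model_with_preprocessing")
```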

--

Azin Asgarian, Applied Research Scientist, Kyryl Truskovskyi, ML Engineer, and Christopher Tee, Software Engineer, Georgian

10X Faster Machine Learning from R&D to Production

In recent years, we have seen astonishing leaps in the application of machine learning in various industries. However, as the complexity of machine learning models and the size of datasets increase, experimenting with these models and productionizing them also becomes more complex and time-consuming! To overcome these challenges and facilitate the adoption of these models in industry, various solutions have been proposed by the ML community over the last few years. In this workshop, we walk you through some of these solutions and show you useful practices to overcome the aforementioned challenges! More specifically, we show you how you can supercharge your machine learning experimentation pipeline with tools like PyTorch Lightning and DVC, and make your path to production smoother and faster using Kubeflow and its add-ons.
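As a flavor of the experimentation side, here is a minimal PyTorch Lightning sketch on synthetic data; the declarative LightningModule is what later makes it easy to bolt on loggers, checkpointing, DVC-tracked data, and multi-GPU training.

```python
# A tiny LightningModule and Trainer run on synthetic regression data.
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset


class LitRegressor(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(1, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)


x = torch.linspace(0, 1, 64).unsqueeze(1)
y = 3 * x + 0.5
loader = DataLoader(TensorDataset(x, y), batch_size=16)

trainer = pl.Trainer(max_epochs=5)
trainer.fit(LitRegressor(), loader)
```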

Speaker's Bio:

Azin Asgarian is currently an applied research scientist on Georgian’s R&D team, where she works with companies to help them adopt applied research techniques to overcome business challenges. Prior to joining Georgian, Azin was a research assistant at the University of Toronto and part of the Computer Vision Group, where she worked at the intersection of Machine Learning, Transfer Learning, and Computer Vision.

--

Sedef Akinli Kocak, Project Manager, Vector Institute; Ali Pesaranghade, AI Research Scientist, LG Toronto AI Lab; Mehdi Ataei, Researcher, Autodesk

Data Shift and Model Adaptation in Machine Learning

Machine learning models are conventionally trained under the premise that the training and the real-world (i.e., both source and target) data are sampled from the same distribution. This assumption may lead to predictive problems in dynamic environments where the distribution of data changes over time. This is known as dataset shift. In most real-world situations, machine learning models have to cope with dataset shift after deployment, and the shift in distribution can be dramatic for unexpected reasons, e.g., the outbreak of the COVID-19 pandemic or cyber attacks.

This tutorial will present:

  • The principles behind data shift.
  • Strategies for detecting dataset shift (a minimal detection sketch follows this list).
  • Adaptation techniques.
  • Advanced topics in data shift and hands-on practice.
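As referenced in the list above, a minimal shift-detection sketch using a two-sample Kolmogorov-Smirnov test on simulated source and target samples might look like this (the drifted target distribution is, of course, synthetic):

```python
# Detect covariate shift in a single feature with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
source = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-time feature
target = rng.normal(loc=0.3, scale=1.2, size=5_000)   # post-deployment feature

statistic, p_value = ks_2samp(source, target)
if p_value < 0.01:
    print(f"Shift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant shift detected")
```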

Speakers' Bio:

Sedef Akinli Kocak (co-presenter) manages academic-industry R&D partnerships and projects in the area of AI/ML and is an accomplished researcher in the areas of ICT for sustainability and advanced analytics. She has a Ph.D. in Environmental Applied Science and Management from the Data Science Lab at Ryerson University. She is currently with the Vector Institute as an AI Project Manager, and she is also a part-time lecturer and supervisor in the Data Science and Analytics Program at Ryerson University.

------

Ali Pesaranghader (co-presenter) is an AI Research Scientist at LG Toronto AI Lab, and a former Sr. Research Scientist at the Canadian Imperial Bank of Commerce (CIBC) with primary research interests in adaptive learning, data stream mining, natural language processing, and transfer learning. Ali obtained his Ph.D. in Computer Science with a focus on Adaptive Machine Learning at the University of Ottawa in 2018.

------

Mehdi Ataei (co-presenter) is a research affiliate at the Vector Institute with a Ph.D. in Computational Physics from the University of Toronto. He is currently a researcher in Autodesk’s Simulation, Optimization, and Systems group. His current research focuses on computational physics, applied mathematics, topology optimization, and machine learning.

--

More to come!!

Organizer: Toronto Machine Learning Society (TMLS)

TMLS events bring together business leaders, researchers and applied ML practitioners.

TMLS is a community of over 5,000 practitioners, researchers, entrepreneurs and executives. We work to highlight global opportunities and foster growth in local ecosystems.
