Highlights From TransformX 2021 Conference
by Alexandr Wang on October 15th, 2021
That's a wrap on the TransformX 2021 Conference! Over 23,000 registrants attended 60 sessions featuring 100 of the world's top leaders, researchers, and practitioners in AI and Machine Learning. Here are the top highlights from some of our favorite sessions.
AI is Key to New Scientific Advances
One of the key themes we heard from multiple speakers is that AI, and specifically Deep Learning (DL), is increasingly becoming an important tool for tackling pressing, long-standing problems like combating disease or climate change, amongst many other examples. As Kevin Scott, CTO of Microsoft, observed, ‘Deep learning is a new analysis tool for scientific problems’. This is especially important for drug discovery, where AI is being used to reverse a decades-long decrease in the pace of research advances, according to Daphne Koller, CEO of insitro. She went on to share how today’s AI is helping perform complex tasks '[…] at a level that is at or beyond that of a human being’. Daphne offered one example where AI is used to classify diseased cells, ‘Human biologists cannot distinguish NASH (a liver disease) from control by eye, but a machine learning model was able to very nicely separate them’.
- Kevin Scott, CTO of Microsoft
The use of AI to address age-old problems appears to be a theme across industries. For example, Martin Chavez, Vice Chairman at Sixth Street Partners and former CIO and CFO of Goldman Sachs, noted that options pricing has for many years relied on the Black–Scholes model, but AI has now been used to produce a better one; as he says, ‘this model is unbelievable!’.
- Martin Chavez, Vice Chairman at Sixth Street Partners and former CIO and CFO of Goldman Sachs
Clément Delangue, CEO of Hugging Face, pointed to recent advances in Transformer models as key to ensuring DL can help us tackle ‘a lot of society’s problems from climate change, vaccines [and] the toxicity of public platforms’. Clément went as far as to issue his own call to action, 'That’s why every single company needs to be intentional about driving more social impact with transformers'.
AI Will Be a Key Business Enabler and Differentiator (If It Isn’t Already)
As Bret Taylor, President and COO of Salesforce, plainly stated in his fireside chat with Scale CEO Alexandr Wang, ‘Every company needs to be an AI company'. Jerry Yang, Founding Partner of AME Cloud Ventures and co-founder of Yahoo, echoed a similar sentiment, sharing that, ‘So the good news about data as a business model is that every Fortune 500 now has a Chief Data Officer. Just about every Fortune 500 CEO I talked to is talking about how do I use AI and how to use data strategy to change their business’.
- Bret Taylor, President and COO of Salesforce
Explaining why this is the case, Eric Schmidt, Co-Founder of Schmidt Futures and former CEO of Google, rhetorically asked, ‘Why have the consumer tech companies gotten so excited about this [AI]? Because it directly improves customer quality, directly improves customer revenue, and it also allows individual targeting to the person or the cohort that person appears to be part of’.
Make It Personal, Say eBay and Netflix
Enterprises are finding that improving customer experiences with AI is incredibly valuable. As Selcuk Kopru, Head of ML and NLP at eBay, noted, ‘In order to build a marketplace that scales well, performs well, and that is being used by millions of users every day throughout the world, you have to productionalize a lot of AI’. This ‘productionalization’ of AI is also what helped enterprises like Netflix and Spotify build high-quality, personal and intimate customer experiences. However, that isn’t a trivial exercise, ‘Personalization is a really hard problem. It fundamentally deals with people, and understanding and trying to predict what people want is extremely hard. Just think about yourself. Sometimes it’s hard to predict what you want, and you know the most about yourself’, said Justin Basilico, Director, ML & Recommendations Engineering, at Netflix.
Spotify Built '365 Million Different Experiences'
Enterprises succeed when they make it a fundamental tenet to differentiate their customer experiences with AI - and make each user feel like the experience is unique and personal to them. As Oskar Stål, VP of Personalization at Spotify, said, ‘Today, matching content and users is at the core of Spotify’. Oskar also echoed Justin’s thoughts by explaining the scale of the personalization problem at Spotify, ‘On one side of the table, there are 365 million users, and on the other side, you could think about 70 million tracks or 2.9 million podcasts. But there isn't really just one experience. We think about it as 365 million different experiences. One experience for each user. And it would take a person 1,475 years to go through all of those 200 petabytes of content. We're also recording around half a trillion events every day. You could see how this is a good place for machine learning’.
- Oskar Stål, VP of Personalization at Spotify
Eric Schmidt urged enterprises to build their own transformative customer experiences that change long-established ways of doing things, ‘The simplest way to think about it is if you're not offering the equivalent of an Android, an iPhone app to do anything, what's wrong with you? Today someone your age and your generation says, "Why do I have to carry a passport? Why do I have to go to a bank? Why do I have a vaccination card?" These all seem dinosaur strategies, and yet they made sense 20 years ago’.
Will You Be the Disrupter, or Disrupted?
Eric went as far as to warn that lightly regulated or unregulated industries are the most susceptible to disruption by those using AI to differentiate themselves, ‘Those industries will fairly quickly be disrupted by these [AI] techniques because either an existing company or a new company will adopt them, and they really will solve the problem better, and that will then create a crisis for the number two, number three, number four. And that's called capitalism'.
- Eric Schmidt, Co-Founder of Schmidt Futures and former CEO of Google
A fundamental problem that many enterprises face is that they are often not prepared with the skills, tools, and processes needed to manage data in a way that helps them train valuable AI models. Bret Taylor observed, ‘If data and intelligence isn’t a department at your company, you’re not really a data culture. What a data culture means is that everyone is making data-informed decisions’. In fact, he warned, ‘If you are not making those foundational investments now, creating a data culture company, building a digital customer experience, building a single source of truth, breaking down those data silos now, you’ll be caught flat-footed’.
If You Don't Have a Digital Business, You Don’t Have a Business
Many enterprises have already undertaken some form of digital transformation - which places them in a better position to use AI. As Bret said, ‘The by-product of every digital interaction is more data, and the more data you can use to inform those digital interactions, the more intelligent and personalized they can be’. Especially in this digital work-anywhere COVID age, enterprises are ‘trying to find their way back to growth’. ‘Disproportionately, that’s about a new customer experience’, Bret offered. Thankfully, the barrier to entry in AI has decreased, and it’s now feasible for many more enterprises to build ‘the same degree of personalization that all of those other big technology companies have’. Building AI on a strong data culture is a business imperative because, ‘if you don’t have a digital business, you don’t have a business’, Bret warned.
If You Wish to Survive, Build Intelligent APIs
How do enterprises without deep data science skills or experience begin to take advantage of AI? Bret says, ‘Thanks to companies like Scale, a lot of SaaS tools are building intelligent capabilities into their products[…] that’s a path for a lot of companies to get value from AI without becoming AI shops’. Martin Chavez argued that we should all be in the business of producing or consuming APIs, especially intelligent, AI-driven APIs, ‘And if you wish to survive, you better get really good at packaging your products and services in an API, and you better be a world-class producer of some set of APIs that are connected to what is core and differentiating about your business’.
For those that are able to invest in the right skills to use AI, Jeff Wilke, Chairman and co-founder of Re:Build Manufacturing and retired CEO of Amazon's Worldwide Consumer, advised enterprises to embed data science at every level of the organization, ‘[embed] data scientists, computer scientists, people that really know AI and making sure that at least one of them sits at the leadership team level’.
- Jeff Wilke, Chairman and co-founder of Re:Build Manufacturing and retired CEO of Amazon's Worldwide Consumer
To Survive, Treat Your AI Professionals Like Partners Not Providers
On the topic of embedding skills into your organization the right way, Martin Chavez explained that many enterprises do it wrong, ‘[…] people get this wrong all the time, right? And it's a failure mode. “Let me acquire, painfully inexpensively, some machine learning engineers, and maybe they will sprinkle machine learning everywhere and we'll be good.” No, no, no!’ He described the strategy that worked for him at Goldman Sachs, where AI experts worked directly with domain subject-matter experts, ‘You've got to take people with the data science skillset and put them next to whoever your professionals are. In one business, it might be doctors or medical scientists, in another business, it might be salespeople, and in yet another it could be bankers, or it could be traders, or it could be portfolio managers. And they need to work side by side as equal partners in the business’. At the very least, Martin says AI-skilled employees should be treated as ‘partners in the business, not as service providers to the business’.
Good AI Needs Good Training Data and Good Training Data Needs Humans-in-the-Loop
Dr. Andrew Ng, Founder of DeepLearning.AI and Founder and CEO of Landing AI, started his keynote by laying out the case for data-centric AI development, ‘And I think it's time for us to shift to the data-centric approach to AI, in which you can even hold the code fixed, but instead find systematic tools and methodologies and principles and algorithms to systematically engineer the data so that when the data is trained on the code, it gives you the performance you need’. In short, ‘iterate more on the data, and less on the model’. To give a real-world example, he highlighted the case of a steel plant where a model was used to identify defects with ~70% accuracy. By investing time and effort in the training data, as opposed to focusing on the model, his team was able to improve the accuracy to 90%.
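To make the ‘hold the code fixed, engineer the data’ idea concrete, here is a minimal sketch of what one round of data-centric iteration could look like. It is purely illustrative, not Landing AI's actual tooling: the model choice, the confidence threshold, and the `relabel_fn` hook supplied by domain experts are all assumptions.

```python
# Minimal, illustrative sketch of a data-centric iteration loop (not Landing AI's
# tooling). The training code is held fixed; each round looks for label problems.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score


def train_fixed_model(X, y):
    """The 'code' never changes: same model, same hyperparameters, every round."""
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)


def find_suspect_labels(model, X, y, threshold=0.9):
    """Flag training examples where the model confidently disagrees with the label,
    i.e. candidates for review by domain experts such as plant staff."""
    proba = model.predict_proba(X)
    predicted = model.classes_[proba.argmax(axis=1)]
    confidence = proba.max(axis=1)
    return [i for i in range(len(y)) if confidence[i] >= threshold and predicted[i] != y[i]]


def data_centric_loop(X_train, y_train, X_val, y_val, relabel_fn, rounds=3):
    """relabel_fn(indices, X, y) -> corrected labels, supplied by human reviewers."""
    model = train_fixed_model(X_train, y_train)
    for _ in range(rounds):
        print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
        suspects = find_suspect_labels(model, X_train, y_train)
        if not suspects:
            break
        y_train = relabel_fn(suspects, X_train, y_train)  # engineer the data, not the code
        model = train_fixed_model(X_train, y_train)
    return model
```

The point of the sketch is simply that each iteration improves the data while the model code stays untouched; in the steel-plant example, the review step is exactly where non-AI domain experts can contribute.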
Your (Non-AI) Domain Experts Are Valuable - Use Them
As if that wasn’t compelling enough, Dr. Ng called out another advantage of data-centric AI - it’s easier to involve domain experts, even if they are not AI-experienced. By ensuring that the training data is well labeled, domain experts were able to contribute directly to the model’s success: ‘[…] but taking a data-centric approach, it puts all of us as AI practitioners in a better position to empower even non-AI specialists, such as the staff in a steel manufacturing plant, to engineer the data systematically to feed to the algorithm and that results in a much bigger performance improvement’.
- Dr. Andrew Ng, Founder of DeepLearning.AI and Founder and CEO of Landing AI
Pay Close Attention to the Long Tail
Dmitri Dolgov, co-CEO of Waymo, observed that the increasing size and complexity of models require thoughtfully curated, high-quality training data, ‘As the models get bigger you need more data, a lot of it was supervised ML with human labeling’. As models and datasets scale, ‘a human looking at the long tail helps you out tremendously’. In fact, he highlights that you must constantly mine for the long-tail scenarios where your model is performing badly so that you can improve it through data-centric development. ‘All of those [long tail] examples tend to be very interesting and very informative. We pay close attention to them. Whenever we find them, it’s part of the data-mining strategy - you bring them into your data sets’, said Dmitri. His advice to other enterprises is to be intentional about understanding all of the scenarios their models will face, ‘You need to be bold to evaluate the performance of your system. You need to invest in data mining to find interesting examples that are representative of that [long] tail part of the distribution’.
- Dmitri Dolgov, co-CEO of Waymo
When it comes to developing models for Autonomous Vehicles (AVs) at Waymo, Dmitri shared a few examples from his own ‘long tail’, ‘And in terms of really rare stuff, we see a lot, things like a drunk cyclist weaving through traffic with a stop sign on his back. Halloween is always a good source of interesting data. You see people wearing Halloween costumes, witches, ghosts, spiders, dinosaurs, all kinds of animals, animals on the road, horses, other animals doing animal things. We recently saw a Bubble Truck. It's a truck that drives around making bubbles…’. In short, when you build AI models for the real world, you have to know how they will perform across all scenarios - even those that occur only rarely.
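As a generic illustration of this kind of long-tail mining (a sketch only, not Waymo's pipeline; it assumes a classifier exposing `predict_proba` and logged examples held in a NumPy array), the snippet below pulls the examples the current model is least sure about into a labeling queue, so that rare, hard cases flow back into the training set.

```python
# Illustrative long-tail mining sketch (not Waymo's system): surface the logged
# examples the current model handles worst so they can be labeled and added to
# the training set.
import numpy as np


def mine_long_tail(model, logged_examples, tail_fraction=0.01):
    """Return the hardest `tail_fraction` of logged examples for human review."""
    proba = model.predict_proba(logged_examples)   # shape: (n_examples, n_classes)
    uncertainty = 1.0 - proba.max(axis=1)          # low confidence ~ 'interesting'
    n_tail = max(1, int(len(logged_examples) * tail_fraction))
    hardest = np.argsort(uncertainty)[-n_tail:]    # indices of the most uncertain examples
    return logged_examples[hardest]

# Typical loop: label the mined examples, fold them into the training data,
# retrain, and repeat, so the long tail is covered deliberately rather than by luck.
```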
Debug Your Data and Automate the Easy Stuff
Dmitri’s advice to other enterprises building their own AI was to use as much automation as possible to speed up the way they ‘debug’ their data, ‘Your data iteration has to be as fast and efficient as possible. The same goes for your ML infrastructure. So your talented people, your research engineers, can work on other things’.
Richard Socher - 'You Will Need to Have Humans in the Loop'
Richard Socher, CEO of You.com and the fifth most-cited researcher in NLP, described the importance of humans in the loop when labeling data for AI, using the example of training a chatbot, ‘Both the running experiments, collecting data, labeling data, helping you understand problems in the data, biases that you may have in your data sets, and issues you have dealing with distribution shifts where over time new things happen. You release a new product, your chatbot doesn't yet know how to respond to those kinds of questions, and having continuous integration tests because as you automate more and more harder and harder intellectual tasks that are still somewhat repetitive, you will need to have humans in the loop[…]’.
- Richard Socher, CEO of You.com
AI is Enabling New Ways for Robots to Perceive and Interact with the Physical World
Dr Fei-Fei Li, Sequoia Professor of Computer Science at Stanford University, used a historical perspective from evolutionary biology to demonstrate how our intelligence is linked to the ability to see or perceive the world around us. This is particularly important as we conduct research to help machines perceive the world, and then interact with it. Those machines could be manufacturing robots, autonomous vehicles, or even something that loads your dishwasher. As Dr Li shared, ‘Vision is a cornerstone of human intelligence, whether biological or artificial’. Dr Li also demonstrated how robots can learn to perform complex ‘long-horizon tasks’ such as clearing a table or loading a dishwasher. In a fascinating segment, she showed how robots could learn in much the same way that a young child might - through experimentation and play. As she says, ‘Moving around in the world is both explorative and exploitative…this helps the [robotic] agent generalize’.
- Dr Fei-Fei Li, Sequoia Professor of Computer Science @ Stanford University
From ‘Blind’ Robots to Robots That Perceive, Reason, and Act
Marc Segura, Group Senior Vice President, Managing Director Consumer Segments & Service Robotics, illustrated how robots can grow beyond the ‘blind’ ones that perform static, repetitive tasks, such as on a production line, where the placement of materials and objects is pre-programmed and there is no need for perception, reasoning, or planning. He highlighted that true robot ‘skills’ actually require a sequence of ‘actions and decisions’. For example, ‘[…] if you want to pick something from a box and put it into another box [i.e. for eCommerce order fulfillment], you need to break it down. First thing is you need to localize the box. That could be one skill. Then you need to localize the object inside the box, segmented out from the rest, and then you need to decide how to pick it, actually pick it and then comes a dropping process’.
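A hypothetical sketch of that decomposition is below. The function names, types, and structure are assumptions made purely to illustrate the idea of chaining individually learned ‘skills’; they do not correspond to any real robotics API.

```python
# Hypothetical skill decomposition for bin picking, following Marc Segura's
# description. Each stub stands in for a separately learned skill; none of
# these names correspond to a real robotics API.
from dataclasses import dataclass


@dataclass
class Pose:
    x: float
    y: float
    z: float


def localize_box(image) -> Pose:
    """Skill 1: find the source box in the camera image."""
    raise NotImplementedError  # placeholder for a learned perception model


def localize_object(image, box: Pose) -> Pose:
    """Skill 2: find the object inside the box and segment it from the rest."""
    raise NotImplementedError


def plan_grasp(obj: Pose) -> Pose:
    """Skill 3: decide how to pick the object."""
    raise NotImplementedError


def pick_and_place(image, drop_box: Pose, robot) -> bool:
    """Chain the skills: localize, segment, decide, pick, then drop."""
    source_box = localize_box(image)
    obj = localize_object(image, source_box)
    grasp = plan_grasp(obj)
    return robot.pick(grasp) and robot.drop(drop_box)
```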
Cobots - Robots That Learn to Work With You
Here Marc explained how AI enables advances over these earlier ‘blind’ robots, ‘What AI is bringing is a great possibility to develop robot skills that are learning over time’. In fact, this can now give us cobots, i.e. robots that work collaboratively with humans. Marc described a possible cobot scenario, ‘We're going to have more and more mixed work in between human operators and robots. So if the robots get to know and can measure, for example, the average stack time of a person, or if a person is behaving in a certain way, and they can adapt to the person and optimize the process, this is really something’. As Marc says, ‘You don't need to separate robots and people anymore’.
- Marc Segura, Group Senior Vice President Managing Director Consumer Segments & Service Robotics
In another example of how robots can change the way we think about work, Marc described how robots could be used for ‘low batch automation’, such as you might find in ‘a laboratory in a hospital’, where there may be many tasks with low numbers of repetitions. An example might be loading different types of test samples into different testing machines - each one is a slightly different task that may only have to be repeated 10 or 20 times. Rather than have a robot dedicated to a single task, it could move between workstations to perform different tasks as needed. Then a (human) lab manager could simply direct robots between workstations to wherever there was a queue of work to be done. The kind of AI advances required to make this vision a reality bear a striking resemblance to the learn-through-play-and-exploration approach to long-horizon tasks described by Dr. Fei-Fei Li.
Explainable AI is Hard (And That’s OK)
For many industries, it’s important not just to build a high-performing AI model, but also to be able to explain why that model behaves as it does. This is especially true for highly regulated enterprises where regulators require controls that mitigate bias and discrimination. However, according to Ilya Sutskever, Co-Founder and Chief Scientist at OpenAI, ‘The difficulty of understanding what neural networks do is not a bug, but a feature’. He went on to explain, ‘neural networks are as successful as they are precisely because they are difficult to reason with mathematically’.
In fact, Ilya described that when it comes to how humans see, hear, and understand language, ‘We can’t explain how we do the cognitive functions that we do’, therefore, ‘if computers can produce objects that are similarly difficult to understand, not impossible but similarly difficult, it means you’re on the right track'.
- Ilya Sutskever, Co-Founder and Chief Scientist at OpenAI
Data-Centric AI - Debug Your Model by Debugging the Data
Since AI models are rarely directly interpretable, practitioners are often left to explore alternative avenues to better understand model performance. Dr Andrew Ng described how practitioners should take an iterative approach to exploring model performance within the context of the data used to train it. Using this ‘data-centric’ method, AI teams can identify areas of poor model performance and connect that performance directly to particular slices of their training data. For example, Dr Ng explained how his team diagnosed poor performance in a defect-detection model at a steel manufacturing plant. By correlating performance with training data, his team not only explained why the model performed the way it did, but also identified slices of training data that could be engineered to improve performance. Tools like Scale Nucleus help practitioners sort predictions by error metrics or explore interactive confusion matrices to explain model failures by identifying the contributing training data samples.
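As a plain-Python illustration of that idea (deliberately generic, and not the Scale Nucleus API; the slice labels and helper below are hypothetical), the sketch groups prediction errors by a metadata slice so that weak spots can be traced back to the data behind them.

```python
# Generic sketch of connecting model errors to data slices (not the Nucleus API).
import numpy as np
from sklearn.metrics import confusion_matrix


def error_by_slice(y_true, y_pred, slice_labels):
    """Print the overall confusion matrix, then the error rate per metadata slice
    (e.g. camera, production line, defect type) to show where the data needs work."""
    y_true, y_pred, slice_labels = map(np.asarray, (y_true, y_pred, slice_labels))
    print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
    for s in np.unique(slice_labels):
        mask = slice_labels == s
        error_rate = float(np.mean(y_true[mask] != y_pred[mask]))
        print(f"slice={str(s):<20} n={int(mask.sum()):>6} error_rate={error_rate:.1%}")

# Slices with unusually high error rates are where the training data is most likely
# mislabeled, inconsistent, or under-represented, i.e. where to iterate next.
```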
The Geopolitical Landscape of AI is Changing in Ways That Will Have Far-Reaching Consequences
In more than one discussion, the position of China and its race to be a leader in AI was presented as a serious concern. As Eric Schmidt noted, ‘This is a national security challenge for the United States. If you want, for the next 20 or 30 years, for American values, American technology, American startups to be global platforms, we need to get our act together now. Because our competitor is doing that already’. Eric went on to share how he himself had underestimated the pace of AI development in China, ‘In March, we said that we were one to two years ahead of China in AI. In June, they demonstrated a universal model of a size similar to that of OpenAI's GPT-3, which is a significant accomplishment on China's part. Now, maybe it's not as good, but the important point is they know what they're doing and they're on their way’.
TikTok - A Sign of China's Accelerating AI Progress
Eric also held up TikTok as an example of China’s accelerating progress, which surprised even him, ‘TikTok is a good example of the first really breakout platform from China. By the way, it’s a high-quality platform and much of its apparent success is because it has a different AI algorithm for matching. It matches not who your friends are, but rather what your interests are, using a very special algorithm. That is an example where I would have said that would not occur for another 5 years. So we have relatively little time - maybe a year or two. Not 5 or 10, to get ourselves organized’.
We Need To Change The Way We Fund AI, or Risk Falling Behind
How do we meet this challenge? Part of the answer lies in how we fund the increasingly fast pace of AI research through government funding. As Mac Thornberry, former U.S. Representative for Texas's 13th Congressional District in the US House of Representatives, noted, ‘We've got to act with a much greater sense of urgency. And I would say, just as one example, the sort of two-year budget cycle that we've used for DOD budgets is just not going to cut it with technology that changes this quickly and adversaries that are moving this quickly’. He offered that we should fund ‘[…] some pool of money, for example, related to artificial intelligence, where there is greater flexibility in spending it with full transparency to Congress in how it is spent'. This was also one of the recommendations that the National Security Commission on Artificial Intelligence (NSCAI), chaired by Eric Schmidt, made in their report earlier this year.
- Mac Thornberry, Former U.S. Representative for Texas's 13th Congressional District @ US House of Representatives
Mac reminded us why it’s important to accelerate the pace of AI for national defense, ‘There is always a gap between the development of new technologies and their adoption by militaries, but the fate of nations is decided within that gap’.
To Be Continued!
There are so many more highlights of TransformX that we would love to share with you - especially when it comes to the panel discussions and informative breakout sessions. Do stay tuned for more updates as we bring you more key takeaways from TransformX.
What were your favorite takeaways of TransformX? Let us know here!