Speaker

François Chollet
Author of Keras
Deep Learning Researcher, Google
Bio
François Chollet is a software engineer at Google, where he leads the Keras team. He is the author of a popular textbook on deep learning. He also does research on abstraction, reasoning, and how to achieve greater generality in artificial intelligence.
The Next Five Years of Keras & TensorFlow with François Chollet of Google
March 26, 2021
5:00 PM - 5:30 PM UTC (30 minutes)
The field of deep learning is still evolving rapidly, and a growing number of developers are leveraging it in their applications across an ever-expanding set of use cases. In this talk, you'll learn about the latest developments in the Keras & TensorFlow ecosystem, and you'll find out how Google is preparing for the next generation of deep learning research and applications.