CEO and Founder at Scale AI
Alex is the CEO and Founder of Scale AI. He was inspired to solve ML infrastructure problems and accelerate the development of AI through his work at Quora, where he was a technical lead. He previously worked as an algorithm developer at Hudson River Trading and as a software engineer at Addepar. He attended, and...
Staff Research Scientist at Google AI
Christian Szegedy is a researcher at Google Brain. He discovered adversarial examples, invented BatchNorm, and his computer vision research has laid the foundations for modern convolutional neural network architectures.
Today, he's working on formal reasoning and dreams of creating an automated software...
The title of your paper on adversarial examples was Intriguing Properties of Neural Networks. I mean, it was almost like you had discovered this curiosity, and it wasn't really framed in a ... in the context that they are now. Right now it's like safety is the primary context in which people talk about them.
Yeah, so actually it was ... It's a funny story, because I had these adversarial examples lying in my drawer for more than a year, almost two years. I discovered them in 2011, but I was too lazy to publish. Then Wojciech came to me and wanted to write a paper with all kinds of ... He said, "Okay, you have this thing, and we can combine it with other stuff and publish a joint paper with various intriguing properties." But as people started to bail out and didn't contribute their own stuff, because it was, like, not interesting enough or whatever, the paper ended up being mostly about adversarial examples.
But if I had known that beforehand, I would have just written a paper with Wojciech alone, or maybe completely alone, and then I'd have ... like, just with the title Adversarial Examples. Actually, my manager and I had planned to write a paper titled something like Blind Spots in Neural Networks a year earlier, just on that topic, but I was too lazy to do it.
What would you say are the other exciting or potentially underrated areas of research in AI right now?
I think that goes back to another one of your questions: what should we do about AI being misused?
So, a lot of people pay lip service and say, "Yeah, we do this and that." But the real question is how you combat certain negative effects of machine learning, and what those negative effects even are, because a lot of them are kind of invisible. How are decisions made about our lives, by insurance companies, agencies, and so on? This is just a small slice, and I don't claim to know everything, but as AI gets applied more and more, all the biases that go into these AI systems will affect everybody more and more. I think that's something we should research much more and take much more seriously.