
Blog

Company Updates & Technology Articles

November 6, 2017

Product

Increasing the Performance of Dynamic Next.js Websites

One of the most obvious signs that a site is built with Next.js is that you load a page, click a link to another section, and it loads instantly. That's because when a page is loaded, Next.js downloads in the background all the other pages linked with <Link> tags. This works great for static pages, but as soon as you have to fetch some data in getInitialProps, loading that page takes time even if the page code is prefetched. And what is worse: unless you explicitly add a loading indicator, it usually takes that time with no indication in the UI! Prefetching and caching data is not trivial. For example, it could be wrong to serve a slightly outdated version of certain information, and different data could have different caching rules. Here at Scale we decided to implement a lightweight caching and data prefetching strategy, which gives us this instant navigation experience without compromising data freshness.


Caching in getInitialProps


On certain parts of our site, we know it is OK to show data that is a few minutes old. For those cases, we implemented a simple caching strategy that stores data with a 5-minute TTL. The implementation is an abstraction around the fetch API that we called cached-json-fetch:


import lscache from 'lscache';
import fetch from 'isomorphic-fetch';

const TTL_MINUTES = 5;

export default async function(url, options) {
  // We don't cache anything when server-side rendering.
  // That way if users refresh the page they always get fresh data.
  if (typeof window === 'undefined') {
    return fetch(url, options).then(response => response.json());
  }

  let cachedResponse = lscache.get(url);

  // If there is no cached response,
  // do the actual call and store the response
  if (cachedResponse === null) {
    cachedResponse = await fetch(url, options)
      .then(response => response.json());
    lscache.set(url, cachedResponse, TTL_MINUTES);
  }

  return cachedResponse;
}

export function overrideCache(key, val) {
  lscache.set(key, val, TTL_MINUTES);
}

We then use this in the getInitialProps of our pages:


import React from 'react';
import cachedFetch, { overrideCache } from 'lib/cached-json-fetch';

const SOME_DATA_URL = '/some_data';

export default class SomePage extends React.Component {
  static async getInitialProps(ctx) {
    const someData = await cachedFetch(SOME_DATA_URL);
    const isServerRendered = !!ctx.req;
    return { someData, isServerRendered };
  }

  componentDidMount() {
    // When the page is server-rendered,
    // we override the value in the client cache
    if (this.props.isServerRendered) {
      overrideCache(SOME_DATA_URL, this.props.someData);
    }
  }
}

Now, when we client-navigate to this page, it checks the cache before fetching the data from the server. On a full page reload, in contrast, it always fetches the latest data from the server and resets the TTL.


Prefetching data


With a caching layer like this in place, adding prefetching is straightforward. In the example above, all we need to do is call getInitialProps ahead of time so the cache is populated with all the data the page needs. Then, if the page is client-navigated to before the TTL expires, it loads instantly, just like a static page! To achieve this, we can create a simple abstraction over Link that not only downloads the page code in the background but also calls its getInitialProps to populate the cache:


import React from 'react';
import Link from 'next/link';
import { resolve, parse } from 'url';
import Router from 'next/router';

export default class DataPrefetchLink extends Link {
  async prefetch() {
    if (typeof window === 'undefined') {
      return;
    }

    const { pathname } = window.location;
    const href = resolve(pathname, this.props.href);
    const { query } = parse(this.props.href, true);

    const Component = await Router.prefetch(href);

    if (this.props.withData && Component) {
      const ctx = { pathname: href, query, isVirtualCall: true };
      await Component.getInitialProps(ctx);
    }
  }
}

Using this in our pages is as simple as: <Link prefetch withData href="…">. If you have calls in your page’s getInitialProps that do not use the cache, you can check the isVirtualCall flag in the context to skip them when the method is called for caching purposes only. For example:


import React from 'react';
import cachedFetch, { overrideCache } from 'lib/cached-json-fetch';
import fetch from 'isomorphic-fetch';

const SOME_DATA_URL = '/some_data';

export default class SomePage extends React.Component {
  static async getInitialProps(ctx) {
    const someData = await cachedFetch(SOME_DATA_URL);
    const isServerRendered = !!ctx.req;
    const isVirtualCall = ctx.isVirtualCall;

    // No need to call this when prefetching the page,
    // since this data won't be cached
    let someNonCachedData;
    if (!isVirtualCall) {
      someNonCachedData = await fetch('/some_non_cached_data')
        .then(response => response.json());
    }

    return { someData, someNonCachedData, isServerRendered };
  }

  componentDidMount() {
    // When the page is server-rendered,
    // we override the value in the client cache
    if (this.props.isServerRendered) {
      overrideCache(SOME_DATA_URL, this.props.someData);
    }
  }
}

The key element that makes this possible is that the programmatic prefetch API, Router.prefetch, returns the page's constructor, so we can call getInitialProps directly on that object! You can find this extended Link component on npm as data-prefetch-link.

In conclusion, we learned that with some fine-tuning we can make our dynamic Next.js pages load as fast as static pages, without sacrificing the freshness of the data or the flexibility to choose which data we want to cache. We hope this helps you make your web platforms faster!
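
As a final illustration, here is a minimal usage sketch of the published component; importing it as Link mirrors the examples above, while the /dashboard page and its anchor text are hypothetical:

import Link from 'data-prefetch-link';

// Prefetches both the page code and, through getInitialProps,
// the data it needs, so client navigation feels instant
export default () => (
  <Link prefetch withData href="/dashboard">
    <a>Dashboard</a>
  </Link>
);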


August 25, 2017

Customers

Guest Post: Cloudinary Uses Scale to Focus on Image Compression


We recently worked with the team at Cloudinary to help build and evaluate
better image compression and quality measurements. The results of
Cloudinary’s work on Scale were very insightful, and we wanted to share
them broadly to demonstrate how more companies can leverage human
judgments to build high-quality features.


Without further ado, Tal Lev-Ami, CTO of Cloudinary, wanted to write about
how they used Scale:


Here at Cloudinary, we provide a cloud-based tool that enables our users to
compress images and video for their websites and apps. Our goal is to
preserve the visual integrity of the content, but deliver the smallest file
size to any device or browser to ultimately optimize website performance and
end user satisfaction.

One of the hallmarks of the Cloudinary solution is the ability to automate
many functions of image compression, so that developers don’t have to spend
time tweaking each photo and making multiple copies of different sizes and
resolutions to fit every possible scenario. Compression algorithms can be
tricky because they’re trying to make changes that have the smallest visual
impact, but different images can react differently to compression.

As we were developing the algorithm for our “q_auto” capabilities, which
strike a balance between visual quality and file size, we needed to test
it to understand how the resulting images compared in the eyes of human
viewers. Enter Scale.

Many image compression formats, like JPEG 2000 and JPEG XR, have been
tweaked to score well on particular metrics, such as peak signal-to-noise
ratio (PSNR). But these metrics don’t always correlate with human
perception of image quality.

We leveraged Scale to compare pairs of images and tell us which image
humans preferred. With Scale, we ran a variety of tests comparing several
formats, including WebP, JPEG 2000, JPEG XR (lossy), Lepton (MozJPEG,
recompressed with Lepton), FLIF, BPG, Daala, and PNG8 (pngquant+optipng).
We were also able to get input on the difference between the uncompressed
original image and a compressed version.

Scale enabled us to create A/B comparisons that were viewed by human
observers. We submitted over 4,000 image comparisons to Scale, sending at
least four independent Scale requests for each pair of images. This
resulted in at least eight actual human comparisons for each pair. The
outcomes of these comparisons were evaluated alongside other perceptual
metrics such as PSNR, Google’s Butteraugli, DSSIM (Structural
(Dis)Similarity), and a new metric Cloudinary developed called
SSIMULACRA (Structural SIMilarity Unveiling Local And Compression Related
Artifacts).
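
As a rough illustration of what one of these requests might look like,
here is a sketch using Scale's Node client. The method and parameter
names are our assumptions based on Scale's comparison API, and the URLs
and instruction text are hypothetical, not Cloudinary's actual pipeline
code:

// Hypothetical sketch; not Cloudinary's actual pipeline code.
const scaleapi = require('scaleapi');
const client = scaleapi.ScaleClient('SCALE_API_KEY');

// Ask human observers which of two compressed versions looks better
client.createComparisonTask({
  callback_url: 'http://www.example.com/callback',
  instruction: 'Which of these two images has better visual quality?',
  attachment_type: 'image',
  attachments: [
    'https://example.com/codec_a.png',
    'https://example.com/codec_b.png'
  ],
  choices: ['Left image', 'Right image']
}, (err, task) => {
  if (err) throw err;
  console.log('Created comparison task:', task.task_id);
});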

The results showed that overall, PSNR is “correct” in only 67 percent of
the cases. Butteraugli gets it right in 80 percent of the cases, and
DSSIM in 82 percent of the cases. Our new metric, SSIMULACRA, agrees with
human judgments in 87 percent of the cases. Looking just at the
high-confidence human judgments, we found about 78 percent agreement for
PSNR, 91 percent for both Butteraugli and DSSIM, and almost 98 percent
agreement for SSIMULACRA. You can read more about SSIMULACRA and these
results on the Cloudinary blog. Or if you want to give it a try:
SSIMULACRA is free and open-source software!


The results of the Scale comparisons gave us useful data points to
validate our metrics and provided more insight into the compression
benchmarks we are running and the comparison of various image formats.
From these insights we were able to improve our visual perception metrics
and fine-tune our “q_auto” functionality so we know how aggressively we
can compress images.

Through this process we were impressed not only by the useful data points
derived from Scale, but also by the great support we got from the company
and the product’s ease of use, all of which came at a reasonable price.



July 16, 2017

Product

Announcing Our Series A With Accel

We are excited to announce that we have raised a $4.5 million Series A round of funding led by Accel. Along with this funding, Accel’s Daniel Levine has joined Scale’s board. The funding will be used to invest in our rapid growth, expand our offerings, and grow our team.


When we started Scale ten months ago, our goal was to build the simplest API for training data. Since then we have released six APIs (Image Annotation, OCR Transcription, Categorization, Comparison, and Data Collection), and we process millions of API requests each month.


This success has confirmed our conviction that there is no better time than now to build Scale. We are currently at an inflection point in the age of artificial intelligence (AI). Just like software before it, AI will have an enormous effect on productivity and enable incredible new technologies. Scale works with many companies and technologies at the forefront of this shift, including self-driving cars, intelligent assistants, and voice analytics. Our customers agree that integrating AI with accurate human intelligence is crucial to building reliable AI technology. As a result, we believe Scale will be a foundation for the next wave of development in AI.


This financing will allow us to grow our engineering, operations, support, marketing, and sales teams to continue on our journey to make our customers more productive and efficient. We’re committed to continuously improving our cost, speed, and quality, and to providing an excellent customer experience.


We’d also like to thank our colleagues, customers, partners, and investors for supporting us on this journey to build APIs for human intelligence. We look forward to continuing to support companies and developers in creating innovative software in the age of AI. This is just the beginning.


We are actively hiring exceptional engineers, designers, AI researchers, and more. If you are interested in joining Scale, check out our careers page.


January 16, 2017

Product

Training Data For Self Driving Cars With Scale

One of the largest automobile companies in the world uses Scale to help build self-driving cars.


Self-driving cars (SDCs) are one of the most exciting technologies being developed today. If they become a reality, they will significantly impact our lives in several ways:

  • Lives will be saved
  • Transportation will get faster, saving people time
  • People will be freed up to be productive or enjoy entertainment on the road

One of the big challenges for SDCs is extremely high-stakes computer vision. SDCs need to quickly recognize various road conditions, animals, people, other cars, etc. They need to understand how these things are behaving and how they’re likely to behave in the near future. Computer vision, by way of deep learning, has come a long way recently, but it needs to keep improving to ensure SDCs are safe and reliable.


SDC companies are using Scale to tag objects by passing images through Scale’s Image Recognition API. Scalers label cars, people, signs, traffic lights, etc. to help train their models.


An example call would look something like this:


import scaleapi

client = scaleapi.ScaleClient('SCALE_API_KEY')
client.create_annotation_task(
    instruction='Draw a tight box around each **car** and **pedestrian**.',
    attachment_type='image',
    attachment='https://imagedelivery.net/wLbZE4_NzVVdgHc15St55g/e2bea5ad-8ea7-4726-6f9e-9e99264c0600/public',
    objects_to_annotate=['car', 'pedestrian'],
    with_labels=True,
    callback_url='http://www.example.com/callback'
)

At Scale, we deeply believe in investing in our customer experience. It’s great to have customers that push us, particularly in ways that benefit all customers. We worked closely with the automobile company to make sure our quality was up to their standards, while maintaining a reasonable cost. Some examples of specific things we did to deliver for the customer:

  • Our Scalers take online tests to determine their skill level at a specific task. We improved the Image Recognition test to ensure that Scalers were bounding objects to within a pixel of accuracy on each image. Not every customer needs that accuracy, but it’s nice to have nonetheless!
  • We invested in the software tool our Scalers use to annotate images to improve their UX. Improving the performance and layout of the tool made a big difference. We took the cost savings and passed them through to our customers.
  • We noticed our Scalers were still taking a bit longer to get through the images than we had expected. When we investigated, we realized they were spending a lot of time waiting for images to load. We spun up prefetching of the next image to speed things up (see the sketch after this list). And again, we were able to take the cost savings and pass them through to our customers.

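The prefetching idea is simple enough to sketch in a few lines of browser JavaScript. This is an illustration of the technique only, not Scale’s actual annotation-tool code, and the task structure is hypothetical:

// Illustrative sketch only, not Scale's actual tool code:
// while a Scaler works on the current image, warm the browser
// cache for the next one so it renders instantly when needed.
function prefetchImage(url) {
  const img = new Image();
  img.src = url; // triggers the download; the browser caches the image
}

// Hypothetical usage: when task n is shown, prefetch task n + 1
// (the attachment field holds the image URL, as in the API call above)
function onTaskShown(tasks, n) {
  if (n + 1 < tasks.length) {
    prefetchImage(tasks[n + 1].attachment);
  }
}
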
There are countless more things we did to improve the Scale experience that are now available to all of our customers. And we promise to keep improving it.


At Scale, we want to make our customers more productive and efficient. AI and self-driving cars are an awesome application of Scale towards that goal. If you need help training your computer vision models for self-driving cars, join our Slack and ask. We’re confident we can deliver the cost, quality, and Scale (pun intended) you need!
