Instant colour fill with HTML Canvas

TLDR: Demo is at https://shaneosullivan.github.io/example-canvas-fill/ , code is at https://github.com/shaneosullivan/example-canvas-fill .

The Problem

When building a website or app using HTML Canvas, it’s often a requirement to support a flood fill. That is, when the user chooses a colour and clicks on a pixel, fill all the surrounding pixels that match the colour of the clicked pixel with the user’s chosen colour.

To do so you can write a fairly simple algorithm to step through the pixels one at a time, compare them to the clicked pixel and either change their colour or not. If you redraw the canvas while doing this, so as to provide the user with visual feedback, it can look like this.
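
In code, the naive approach is something like this (a minimal sketch, not the app's actual implementation; it assumes an RGBA ImageData and an [r, g, b, a] array as the fill colour):

function slowFill(imageData, startX, startY, fillColour) {
  const { width, height, data } = imageData;
  const idx = (x, y) => (y * width + x) * 4;

  const start = idx(startX, startY);
  const target = data.slice(start, start + 4);

  // If the clicked pixel already has the fill colour there is nothing
  // to do (and proceeding would loop forever).
  if (fillColour.every((value, i) => value === target[i])) {
    return;
  }

  const matches = (i) =>
    data[i] === target[0] &&
    data[i + 1] === target[1] &&
    data[i + 2] === target[2] &&
    data[i + 3] === target[3];

  const stack = [[startX, startY]];
  while (stack.length > 0) {
    const [x, y] = stack.pop();
    if (x < 0 || y < 0 || x >= width || y >= height) continue;
    const i = idx(x, y);
    if (!matches(i)) continue;
    data.set(fillColour, i);
    stack.push([x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]);
  }
}

After the loop finishes you call context.putImageData(imageData, 0, 0); redrawing periodically during the loop gives the animated feedback described above.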

This works but is slow and ugly. It’s possible to greatly speed this up, so that it is essentially instant, and looks like this

To achieve this we pre-process the source image and use the output to instantly apply a coloured mask to the HTML Canvas.

Why did I work on this?

I’ve built a web based app called Kidz Fun Art for my two young daughters, optimised for use on a tablet. The idea was to build something fun that never shows adverts to them or tricks them into sneaky purchases by “accident”. I saw them get irritated by the slow fill algorithm I first wrote, so my personal pride forced me to go solve this problem! Here’s what the final implementation of the solution to this problem looks like on the app.

The Solution

[Edit: After initially publishing, a large speed up was achieved by using OffscreenCanvas in this commit]

Start with an image that has a number of enclosed areas, each with a uniform colour inside those areas. In this example, we’ll use an image with four enclosed areas, numbered 1 through 4.

Now create a web worker, which is JavaScript that runs on a separate thread to the browser thread, so it does not lock up the user interface when processing a lot of data.

let worker = new Worker("./src/worker.js");

The worker.js file contains the code to execute the fill algorithm. In the browser UI code, send the image pixels to the worker by drawing the image to a Canvas element and calling getImageData on its 2D context. Note that you transfer the underlying ArrayBuffer (imageData.data.buffer) to the worker, not the ImageData object itself.


const canvas = document.getElementById('mycanvas');
const context = canvas.getContext('2d');

const dimensions = { height: canvas.height, width: canvas.width };

const img = new Image();
img.onload = () => {
  context.drawImage(img, 0, 0);
  
  const imageData =
    context.getImageData(0, 0, dimensions.width, dimensions.height);

  worker.postMessage({
      action: "process",
      dimensions,
      buffer: imageData.data.buffer,
    }, 
    [imageData.data.buffer]
  );
};

The worker script then asynchronously inspects every pixel in the image. It starts by setting the alpha (transparency) value of each pixel to zero, which marks the pixel as unprocessed. When it finds a pixel with a zero alpha value, it executes a FILL operation from that pixel, where every surrounding pixel is given an incremental alpha value. That is, the first time a fill is executed, all surrounding pixels are given an alpha value of 1, the second time an alpha value of 2 is assigned, and so on.

Each time a FILL completes, the worker stores a standalone image of just the area used by the FILL (stored as an array of numbers). When it has inspected all pixels in the source image, it sends back to the UI thread all the individual image ‘masks’ it has calculated, as well as a single image with all of the alpha values set to numbers between 1 and 255. This means that this methodology supports a maximum of 255 distinct areas to instant-fill, which should be fine, as we can fall back to a slow fill if a given pixel has not been pre-processed.
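
A sketch of the worker's processing loop looks like this. This is illustrative rather than the repo's exact code; the floodFill helper, which paints a connected same-coloured region with the given alpha value and returns it as a mask, is assumed.

// worker.js (sketch)
self.onmessage = (evt) => {
  const { action, dimensions, buffer } = evt.data;
  if (action !== "process") return;

  const pixels = new Uint8ClampedArray(buffer);
  const { width, height } = dimensions;

  // Mark every pixel as unprocessed by zeroing its alpha channel.
  for (let i = 3; i < pixels.length; i += 4) {
    pixels[i] = 0;
  }

  const masks = [];
  let nextAlpha = 1;

  // Scan for unprocessed pixels, flood filling from each one and
  // assigning an incremental alpha value to the filled region.
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const alphaIndex = (y * width + x) * 4 + 3;
      if (pixels[alphaIndex] === 0 && nextAlpha <= 255) {
        // floodFill (assumed) sets alpha = nextAlpha on every connected
        // same-coloured pixel and returns the region as
        // { x, y, width, height, pixels }.
        masks.push(floodFill(pixels, width, height, x, y, nextAlpha));
        nextAlpha++;
      }
    }
  }

  // Send back the masks and the fully processed image.
  self.postMessage({ masks, buffer: pixels.buffer }, [pixels.buffer]);
};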

You see in the fully processed image above that all pixels in the source image are assigned an alpha value. The numeric value corresponds to one of the masks, as shown below.

For this image, it would generate four masks as in the image above. The red areas are the pixels with non-zero alpha values, and the white areas are the pixels with alpha values of zero.

When the user clicks on a pixel of the HTML Canvas node, the UI code checks the alpha value in the image returned from the worker. If the value is 2, it selects the second item in the array of masks it received.
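
That lookup is a one-liner; here's a sketch, where processedPixels and masks are assumed to hold the data the worker sent back, and dimensions comes from the earlier snippet:

canvas.addEventListener("click", (evt) => {
  const rect = canvas.getBoundingClientRect();
  const x = Math.floor(evt.clientX - rect.left);
  const y = Math.floor(evt.clientY - rect.top);

  // Read the alpha value of the clicked pixel in the processed image.
  const alpha = processedPixels[(y * dimensions.width + x) * 4 + 3];
  if (alpha > 0) {
    const pixelMaskInfo = masks[alpha - 1]; // alpha of 2 → second mask
    // ... apply the coloured mask as shown below
  }
});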

Now it is time to use some HTML Canvas magic, by way of the globalCompositeOperation property. This property enables all sorts of fun and interesting operations to be performed with Canvas, but for our purposes we are interested in the source-in value. This makes it so that calling fillRect() on the Canvas context will only fill the non-transparent pixels, and leave the others unchanged.

const pixelMaskContext = pixelMaskCanvasNode.getContext('2d');
const pixelMaskImageData = new ImageData(
  pixelMaskInfo.width,
  pixelMaskInfo.height
);

pixelMaskImageData.data.set(
  new Uint8ClampedArray(pixelMaskInfo.pixels)
);

pixelMaskContext.putImageData(pixelMaskImageData, 0, 0);

// Here's the canvas magic that makes it just draw the non
// transparent pixels onto our main canvas
pixelMaskContext.globalCompositeOperation = "source-in";
pixelMaskContext.fillStyle = colour;

pixelMaskContext.fillRect(
  0, 0, pixelMaskInfo.width, pixelMaskInfo.height
);

Now that you’ve filled the mask with a colour, purple in this example, you just have to draw it onto the canvas visible to the user, at the top left location of the mask, and you’re done!

context.drawImage(
  pixelMaskCanvasNode,
  pixelMaskInfo.x,
  pixelMaskInfo.y
);

It should look like the image below when done.

All the code for this is available on Github at https://github.com/shaneosullivan/example-canvas-fill

You can see the demo running at https://shaneosullivan.github.io/example-canvas-fill/

One caveat is that if you try this code on your local computer by just opening the index.html file, it will not work, as browser security will not let the Worker be registered. You need to run a localhost server and load the page from there.
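
Any simple static file server will do, for example:

npx http-server .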

P.S.

Thanks to the Excalidraw team for making it so easy to create these diagrams, what a fantastic app!

Using Bun.js as a bundler

Bun.js is a new (as of 2023) JavaScript runtime that is still very much in development, with its primary focus being extreme speed. I’ve been following it for a while but until today hadn’t had a good excuse to use it.

(Edit: There’s some good conversation about this post on Hacker News here)

The author, Jarred Sumner, announced on Twitter today that they have shipped a beta version of a new code bundler for Bun, showing some crazy speed increases over other bundlers. This piqued my interest, as I use a combination of Webpack, Browserify and Uglify on my side projects, in this case kidzfun.art, the tablet PWA I built for my kids; that setup works, but it’s really slow.

My current workflow can result in a 5–7 second wait for all my JS files to rebuild when I save a file, and I thought that Bun could help with this. It turns out I was right… with caveats.

You can see the docs for Bun.build() at https://bun.sh/docs/cli/build , and they are well written and quite comprehensive.

My requirements were to

  • Build multiple files quickly, each of which imports multiple other 3rd party files from node_modules.
  • Build minified and non-minified files.
  • Produce files that can be included directly in a browser using a <script> tag.

Getting started

I started off by running the default build code (for Bun v0.6.1)

const myFiles = [...];

await Bun.build({
  entrypoints: myFiles,
  outdir: './build'
});

by adding a script to my package.json file

 "build-browser": "bun scripts/build-browser.js"

and this worked just fine. More importantly, it was crazily fast. Instead of 5 seconds it now seemed to finish as the Enter key was still traveling back upwards from executing the command. Nice!

Minification

Minification looks simple in the docs, but unfortunately it’s where the beta nature of Bun shows up. Running the code above with minification

const myFiles = [...];

await Bun.build({
  entrypoints: myFiles,
  outdir: './build',
  minify: true
});

crashes the process with the following error if there is more than one entry point file:

Bus error: 10

Searching the web didn’t turn up anything, but the solution is to only pass a single entry point file path to Bun.build() if you are minifying the code. Throw that in a for loop to get through all the files and it runs just fine!
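
Something like this (the file paths here are placeholders):

const myFiles = ["./src/page-one.ts", "./src/page-two.ts"];

for (const file of myFiles) {
  // One Bun.build() call per entry point when minifying.
  await Bun.build({
    entrypoints: [file],
    outdir: "./build",
    minify: true,
  });
}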

A second issue with the default minification is that it broke my app in strange ways that I could not track down – I’m guessing that it’s rewriting the code in some way that is not fully stable yet. I solved it by turning off the syntax minification option

const myFiles = [...];

await Bun.build({
  entrypoints: myFiles,
  outdir: './build',
  minify: {
    whitespace: true,
    identifiers: true,
    syntax: false // Setting this to false fixes the issue
  }
});

Removing Exports

Bun inserts code that looks like this at the bottom of the built file, in this case from a file called account.ts

var account_default = {};
export {
  account_default as default
};

If you load this in a browser <script> tag it will throw an error. I couldn’t find a way to tell Bun not to output this, so I had to write a relatively simple function to detect this at the end of each output file and remove it.
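
Here's a rough sketch of that cleanup; it assumes the export block only ever appears once, at the very end of the built file (the var line itself is harmless in a browser, it's the export statement that throws):

import fs from "fs";

function stripTrailingExport(filePath) {
  const code = fs.readFileSync(filePath, "utf8");
  // Find the final export block Bun appends and cut it off.
  const marker = code.lastIndexOf("\nexport {");
  if (marker !== -1) {
    fs.writeFileSync(filePath, code.slice(0, marker) + "\n");
  }
}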

Watch issues

I have some code that uses the node-watch module to automatically re-run the build when a file changes. Under the hood this uses the fs.watch function, which it turns out Bun does not yet support. Here’s the Github issue tracking it. I tried the native Bun watch functionality, but that re-runs the whole script when a file changes, which is not what I’m looking for.

I came up with a hacky solution that works fairly well, where I use the RunOnSave extension for VS Code to execute

touch ./.last_modified_timestamp

every time I save a file. Then in my build script I use setInterval to check the last modified time of this file and re-run the build if it has changed. Hacky but it works. Hopefully Bun will implement fs.watch soon and I can throw out this code.

import fs from "fs";

function build() {
  ...
}

const timestampFilePath = `${rootDir}/.last_modified_timestamp`;
if (fs.existsSync(timestampFilePath)) {
  let lastModifiedRootFolder = 0;
  setInterval(() => {
    const stat = fs.statSync(timestampFilePath);
    if (stat.mtime.getTime() !== lastModifiedRootFolder) {
      lastModifiedRootFolder = stat.mtime.getTime();
      build();
    }
  }, 500);
}

Vercel build failures

Once everything was running just fine locally on my Mac, I pushed the branch to Github so Vercel would build it (it’s a NextJS application). This threw up a new issue. My build script uses the native Node execSync() function to move and copy files. This works just fine on my Mac, but when running the build in the cloud environment all these calls would fail. There’s something unfinished with Bun’s implementation of the child_process module that breaks when run in the Vercel build environment.

My solution to this was to simply change all these execSync calls to use the Node fs functions, e.g.

import fs from 'fs';
// ...
fs.copyFileSync(srcPath, destPath);
fs.renameSync(path, `${rootDir}/public/${fileName}`);

Epilogue

After a few hours of work, reading up on Bun and working my way through these issues, I now have a much simpler build system that runs in the blink of an eye. My Vercel build times have reduced from 2 minutes to just 50 seconds (that’s all React stuff & fetching node_modules). My watch script runs in a few milliseconds instead of 5 or more seconds. My code is much simpler, and I’ve removed Webpack, Browserify and Uglify from my projects.

Thanks so much to the Bun team for a great project. Even as early as it is at time of writing (mid 2023), it’s highly useful, and as they work through all the kinks it will only get more so. I look forward to using it more in the months and years to come!

… oh you’re still here?

The project I sped up using Bun is KidzFun.art, the iPad/tablet app I built for my kids. If you have young kids who

  • like to draw & colour,
  • do maths problems,
  • want to make Gifs from their drawings,
  • should never be shown ads, and
  • somehow care about the open web

then have them try out my progressive web app 🙂

React.js: The Documentary & Q&A

In 2022 I had the great pleasure to chat with Ida Bechtle (https://twitter.com/BechtleIda) as part of a retelling of the early story of the creation of the React.js JavaScript library (I wrote about this previously here). The documentary is now available to watch for free on YouTube, as is the Q&A session that most of the cast took part in immediately after the premiere of the film on YouTube.

I’m incredibly impressed with the final product, which is almost totally down to the skill and hard work of Ida, the film maker, along with her employer Honeypot.io, who generously fund the creation of these types of documentaries. The film tells of the very non-linear and difficult path that React took to becoming the behemoth that it is today, and the important parts that so many dedicated people took to make it happen.

I think Tom Occhino summed it up well in the film, saying that taking any one person out of the early development of React would have resulted in the state of the project being fundamentally different to how it is today. I’m proud to have played a tiny role in its creation, and use it daily.

I hope you enjoy the film, and take away something valuable from it.

React.js: The Documentary

In mid-2022 I had a great time taking part in a documentary about the JavaScript framework ReactJS by the good people at Honeypot, along with many wonderful engineers who also played a part in its success. The film focuses on the early years in the life of ReactJS, including before it was open sourced and in the year or two afterward.

At time of writing (Dec 2022) I haven’t seen the full film, so I’m not sure how much of my content made it into the final cut, but I did my best to provide colour on the very early days of ReactJS, where some of its early influences came from, and the struggles it faced gaining adoption both inside and outside of Facebook.

It will be released in Feb 2023, and here’s the trailer to whet your appetite

Kidz Fun Art – Tablet app for kids

I’ve built a fun new app for young kids, called Kidz Fun Art. Get it at https://kidzfun.art.

I’m a software engineer, but far more importantly I’m the happy dad of two amazing girls, currently 4 and 6 years old. They love to draw, colour in pictures and tell stories, and when I went looking for good iPad apps for them to use, all I could find were advert infected travesties that try to trick kids into clicking into inappropriate content. I was happy to pay for a clean app, but couldn’t find one.

So, I spent a couple of weeks and built one for them, and they love it! Your kids can use it too now at https://kidzfun.art.

It’s a tablet web app that works on almost any tablet (I’ve tested it on iPads, Samsung Android and Microsoft Surface tablets). Your kids can use it to

  • Colour in lovely pictures that my wonderfully artistic wife Fran drew for the app.
  • Draw their own pictures on a blank canvas
  • Download images from the internet to colour in.
  • Stick a picture they’d like to copy in the corner so they can practice drawing it.
  • Practice writing their letters and numbers
  • Do simple mathematical problems, auto-generated each time so they never run out
  • Solve some puzzles
  • Draw comic books

I hope your kids enjoy it as much as mine do

Building Countdoku, a multiplayer Sudoku Web app

tldr: I’ve built a multiplayer Sudoku game which you can play at Countdoku.app. Read on if you are interested in the technical details (they’re cool, honest!).

Screenshot of Countdoku on a mobile device

Many years ago I became interested, as a thought exercise, in how to generate Sudoku puzzles efficiently. Having solved that fairly well with a program written in Java, I went on to build a small multiplayer Java game where people could play the same Sudoku puzzle, taking turns to see who would win. This was long before social media or even Web2, and it ended there.

A few weeks ago I picked the idea back up and decided to port it to modern technologies, namely the modern (as of 2022) Web. I decided to build a Progressive Web App (PWA) with NextJS and React, running on Vercel.

Priorities

My priorities with the project were

  • Keep it simple. It doesn’t need to be fancy, just a game you can pop into when you feel like some diversion.
  • Make it fast. It should work well on slow networks and cheap phones.
  • Easy to share. Players should be able to get others to play with them with minimal effort.
  • Kid friendly. I have two young daughters and wanted to make something they can enjoy playing (they love it!)

Technical choices

I am a huge fan of NextJS and React. They make building a web application far easier than it has ever been in the past (I’ve been doing this since 2003 or so, trust me, I’ve seen things you people wouldn’t believe…). The combination of React’s relatively simple API & easy componentisation, NextJS’ developer environment, and Typescript means I can knock out code quickly and reliably.

Deploying NextJS apps works best on its parent company Vercel’s platform. Of course you can deploy NextJS apps anywhere, but Vercel really is the perfect match for it. Just push to a Github repo and very soon it’ll be live on the web. I also have other Vercel deployed projects, so adding one more was a no-brainer.

My initial plan was to use a persistent connection to receive live updates when another player made a move, so I decided to use Firebase & Firestore, which have SDKs that support this. I’ve had a terrible time in the past using Firestore for complex applications, as it has so many gotchas. However, this project was going to be simple, and they even seem to have fixed some gotchas in the past few years – you can now run a query with a ‘not equals’ condition. So fancy!
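
For example, a tiny sketch using the modular Web SDK (the collection and field names here are made up):

import { initializeApp } from "firebase/app";
import {
  getFirestore, collection, query, where, getDocs,
} from "firebase/firestore";

const db = getFirestore(initializeApp({ projectId: "my-project" }));

// The "not equals" operator that used to be missing.
const q = query(collection(db, "games"), where("status", "!=", "finished"));
const snapshot = await getDocs(q);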

Need for speed

My initial version of the web app used the standard NextJS and React deployment, with JS running both on the server and on the client. However, I found that this was very slow to download on a phone, with over 250KB of JavaScript to download, parse and execute before any application code ran. This was split between React, the large bundle of JS that NextJS sends down the wire to enable fast page transitions, and the Firestore Lite SDK.

I realised that I didn’t need any of that. After rendering the page on the server side, the only visual changes that happened during play were:

  • Changing CSS classes on the Sudoku board to highlight some cells.
  • Showing and hiding dialogs.

Knowing this, I disabled all client side NextJS and React JavaScript by exporting the following from the page JS file.

export const config = {
  unstable_runtimeJS: false,
};

I then wrote a single JavaScript file called browser/index.js in which I put most of the application code. I separated some other code out into its own small files, e.g. everything to do with accessing the API, and wrote a simple script for copying/transpiling any utility code I shared with the server side, which you can find on Github.

Then, it was a fairly simple thing to use Browserify and Tinify to build the resulting JS file and put it in the public/js folder, to be loaded. While developing, I used node-watch to listen for code changes and rebuild the main.js file.

All of this was duplicating what NextJS gives you for free, but the result was that I loaded 19KB of JavaScript instead of hundreds of KB, and every byte was application code that provided functionality to the user.

Goodbye Client side Firebase

As for the Firebase/Firestore code, I removed the client side SDK, and moved all that functionality to a few simple APIs, which used the Firebase Node SDK instead. Then I simply called these using fetch() on the client. The downside of this is that I no longer had access to the real time updates that Firebase can provide, but those are only available through its full SDK, not the Lite version, and that is simply huge. Replacing it with a regular polling of an API endpoint gives the user basically the same functionality, and I store timestamps on every user change, so the API only returns the changes since your last request, keeping the payload small.
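
A sketch of that polling loop (the endpoint name, response shape, interval and the applyMoveToBoard helper are all assumptions):

let lastSeen = 0;

async function pollForMoves() {
  const res = await fetch(`/api/moves?since=${lastSeen}`);
  if (!res.ok) {
    return;
  }
  const { moves, latestTimestamp } = await res.json();
  if (moves.length > 0) {
    lastSeen = latestTimestamp;
    // Apply each change: tweak CSS classes on the board, show dialogs etc.
    moves.forEach(applyMoveToBoard);
  }
}

setInterval(pollForMoves, 2000);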

Finally, I added a simple service worker to make this a Progressive Web App. I basically copy/pasted the great example from the Chrome Dev team, and modified it slightly to suit my needs. This not only makes some requests on repeat visits quicker, it also allows the app to run in full screen mode, hiding the browser chrome, which suits this game style of app.
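
The registration side is the standard snippet (the /sw.js path here is an assumption; the worker script itself is the part adapted from the Chrome example):

if ("serviceWorker" in navigator) {
  window.addEventListener("load", () => {
    navigator.serviceWorker.register("/sw.js");
  });
}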

Result

The final result is that Lighthouse on Google Chrome gives a 100 performance score. This is largely because, once the HTML is sent down by NextJS on the server, only 16KB of JavaScript is loaded, and none of it renders anything on the screen at page load time, so there are no layout shifts.

Conclusion

This method of building apps will not suit every use case of course. There are applications where the UI needs to change significantly as the user interacts with it, or new data needs to be loaded and displayed with a UI or template that has not yet been rendered.

Conversely, there are many applications where this is a perfectly valid method of building fast, responsive user experiences. I’m currently building a far more complex application in a similar fashion (hopefully launching in the coming months), and it is working very well. When large UI changes are required, link to a new page and load it. Otherwise, have small scripts that slightly modify the DOM as needed, along with some reusable code for showing/hiding areas of the screen (dialogs, drawers etc), and load new content rendered server side when necessary.

NextJS and React really do provide a fantastic developer experience, but for anything I want to run on mobile, they’re just too heavy to run client side. This approach keeps the great developer experience, keeps most of the code running on the server, and also gives a great experience to your users no matter how cheap their phone or shoddy their network connection is. Everyone wins.

Finally, I’d love any feedback you have on Countdoku as a game! Feel free to tweet at me, or email me. Try out the easiest mode with young kids, I’ve found they get a kick out of puzzles they can actually solve, before they graduate onto the much harder puzzles later.

Epilogue

One alternative I considered was to use Web Components on the client side. They’re definitely interesting, but since I had a good server side rendering story already, and didn’t need to technically render any new UI on the client side, they would have been overkill. Plain old JS was more than sufficient to make calls to the API, show/hide some pre-rendered dialogs and fiddle with CSS class names on existing DOM nodes. I might give them a go for a future side project however.

Forwarding an email with attachments using SendGrid and Formidable

SendGrid.com is a great service for sending emails programmatically, but it also has an Inbound Parse feature that will call your webhook for any emails sent to your domain. It can be useful to forward these emails elsewhere, e.g. sending support@mydomain.com to your own personal email.

I couldn’t find a good example of how to do this while including attachments, so here you go.

import fs from "fs";
// You can use "formidable-serverless" if in a serverless environment like
// AWS Lambda or Google Cloud Functions
import formidable from "formidable";
import sendgridMail from "@sendgrid/mail";
import { AttachmentData } from "@sendgrid/helpers/classes/attachment";

sendgridMail.setApiKey(process.env.SENDGRID_API_KEY);

// See https://www.npmjs.com/package/formidable
interface FormidableFile {
  // The size of the uploaded file in bytes.
  // If the file is still being uploaded (see `'fileBegin'` event),
  // this property says how many bytes of the file have been written to disk yet.
  size: number;
  // The path this file is being written to. You can modify this in the `'fileBegin'` event in
  // case you are unhappy with the way formidable generates a temporary path for your files.
  path: string;
  // The name this file had according to the uploading client.
  name: string | null;
  // The mime type of this file, according to the uploading client.
  type: string | null;
  // A Date object (or `null`) containing the time this file was last written to.
  // Mostly here for compatibility with the [W3C File API Draft](http://dev.w3.org/2006/webapi/FileAPI/).
  lastModifiedDate: Date | null;
  // If `options.hash` calculation was set, you can read the hex digest out of this var.
  hash: string | "sha1" | "md5" | "sha256" | null;
}

// Hook this up to your inbound API however you like, e.g.
// using Express
function handleRequest(req, res) {
  const form = new formidable.IncomingForm();
  form.uploadDir = "/tmp/";
  form.keepExtensions = true;
  form.type = "multipart";
  form.multiples = false;

  form.parse(req, async (_err: any, fields, files) => {
    handleFormidableResult(fields, files).then(() => {
      // Send whatever you want
      res.status(200);
      res.json({ success: true });
      res.end();
    });
  });
}

async function handleFormidableResult(fields, files) {
  const { to, subject, from, html } = fields;
  const fileKeys = Object.keys(files);

  let attachments: Array<AttachmentData> | null = null;
  let cleanupPromises = null;

  if (fileKeys.length > 0) {
    const filesInfo = fileKeys.map((key) => files[key]) as Array<FormidableFile>;

    const attachmentPromises = filesInfo.map((fileInfo) => {
      return new Promise((resolve, reject) => {
        fs.readFile(fileInfo.path, (err, data) => {
          if (err) {
            reject(err);
            return;
          }
          const attachment: AttachmentData = {
            // Encode the buffer as a base64 encoded string
            content: data.toString("base64"),
            filename: fileInfo.name,
            type: fileInfo.type,
            disposition: "attachment",
            contentId: fileInfo.hash,
          };
          resolve(attachment);
        });
      });
    });

    // Feel free to do better error handling, where if one file fails to
    // read then you still attach others. Keeping it simple here.
    attachments = (await Promise.all(attachmentPromises)) as Array<AttachmentData>;

    // Delete all temp files.
    cleanupPromises = filesInfo.map((fileInfo) => {
      return new Promise((resolve) => {
        fs.unlink(fileInfo.path, () => {
          resolve(null);
        });
      });
    });
  }

  const emailBody = html || fields.text;
  const message = {
    from: "no-reply@example.com",
    to,
    subject,
    html: emailBody,
    envelope: {
      from: "no-reply@example.com",
      to,
    },
    attachments,
  };

  try {
    await sendgridMail.send(message);
  } catch (err) {
    console.error("Sending email failed with error", err, " message ", message);
  } finally {
    if (cleanupPromises) {
      await Promise.all(cleanupPromises);
    }
  }
}

Uploading a signed video to Cloudinary: a code example

Cloudinary is a fantastic cloud service for storing, serving and transforming images and videos. However, the documentation for uploading an image or video from the browser in a secure fashion is pretty poor. The various examples are scattered around the place, and none of them shows, in one place, how to

  • Sign a request on the server
  • Use that signed request to upload a video
  • Track the progress of the video upload

I had to patch it together for myself, and thought it’d be useful for you all.

// Run in the browser
// This function takes "someId" as a parameter, as an example that you
// may want to link the video upload to some object in your database.
// This is of course totally optional.
function uploadVideo(
  someId: number,
  file: File,
  listeners: {
    onProgress: (perc: number) => void;
    onComplete: (url: string) => void;
    onError: (str: string) => void;
  }
): () => void {
  let cancelableXhr: XMLHttpRequest | null = null;

  fetch("/api/signCloudinaryUpload", {
    method: "POST",
    cache: "no-cache",
    credentials: "include",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      someId,
    }),
  })
    .then((res) => {
      if (!res.ok) {
        listeners.onError("Permission to upload denied");
      } else {
        return res.json();
      }
    })
    .then((signatureInfo) => {
      if (!signatureInfo) {
        return;
      }
      cancelableXhr = runUpload(
        signatureInfo.cloud_name,
        signatureInfo.api_key,
        signatureInfo.signature,
        signatureInfo.public_id,
        signatureInfo.timestamp
      );
    });

  function runUpload(cloudName, apiKey, signature, publicId, timestamp) {
    const url = `https://api.cloudinary.com/v1_1/${cloudName}/upload`;
    const xhr = new XMLHttpRequest();
    const fd = new FormData();

    xhr.open("POST", url, true);
    xhr.setRequestHeader("X-Requested-With", "XMLHttpRequest");

    listeners.onProgress(0);

    // Update progress (can be used to show progress indicator)
    xhr.upload.addEventListener("progress", function (e) {
      const progress = Math.round((e.loaded * 100.0) / e.total);
      listeners.onProgress(progress);
      console.log(
        `fileuploadprogress data.loaded: ${e.loaded}, data.total: ${e.total}`
      );
    });

    xhr.onreadystatechange = function (e) {
      if (xhr.readyState == 4 && xhr.status == 200) {
        // File uploaded successfully, pass the secure URL to the caller
        const response = JSON.parse(xhr.responseText);
        console.log("response", response);
        listeners.onComplete(response.secure_url);
      }
    };

    fd.append("api_key", apiKey);
    fd.append("public_id", publicId);
    fd.append("timestamp", timestamp);
    fd.append("signature", signature);
    fd.append("file", file);
    xhr.send(fd);

    // Return the xhr so the caller can cancel the upload
    return xhr;
  }

  return () => {
    cancelableXhr && cancelableXhr.abort();
  };
}

// Run on the server
import { v2 as cloudinary } from "cloudinary";

cloudinary.config({
  cloud_name: "", // Your cloud name
  api_key: "", // your api key
  api_secret: "", // your api secret
});

function signCloudinaryRequest(publicId: string) {
  const timestamp = Math.round(new Date().getTime() / 1000);
  const apiSecret = (cloudinary.config("api_secret") as any) as string;
  const signature = cloudinary.utils.api_sign_request(
    {
      timestamp,
      public_id: publicId,
    },
    apiSecret
  );
  return {
    api_key: (cloudinary.config("api_key") as any) as string,
    signature,
    cloud_name: (cloudinary.config("cloud_name") as any) as string,
    timestamp,
  };
}

function apiHandler(request, response) {
  // This assumes that you have a bodyParser set up
  const someId = request.body.someId;
  const publicId = `${someId}/video`; // use whatever path you like
  const signatureInfo = signCloudinaryRequest(publicId);
  response.status(200);
  response.json({
    api_key: signatureInfo.api_key,
    cloud_name: signatureInfo.cloud_name,
    public_id: publicId,
    signature: signatureInfo.signature,
    timestamp: signatureInfo.timestamp,
  });
  response.end();
}

The story of how Google could have killed Facebook with the flick of a switch

As we near the end of this decade, and more importantly the end of the hell that was 2020, I realised two things: first, that I survived catching Covid-19, and second, that I had a good story about the history of Silicon Valley that I’d never written down. I’m still functional, so here, for perpetuity, is my tale.

Back in 2013 I was working in the Ads Interfaces organisation at Facebook, building mostly front end products (other people did the AI, database etc). We had an application called Power Editor which was the kitchen sink of products, and 25% of all Facebook revenue depended on it working. Every single thing you could do with ads on Facebook was supported in Power Editor (we called it P.E. for short). This made it huge, hugely complex and pretty user hostile. However our big spending customers were forced to use it as it was the only way they could efficiently scale – PE had a lot of cool tools for duplication, permuting your ads and working in batches of thousands of changes.

By 2013, PE was creaking under its own weight (about 150k lines of front end JavaScript), and no one wanted to work on it. We had half of one (awesome) engineer supporting it, and she could just about keep it working. I was looking for an opportunity to become a manager, and my manager Brian and I decided that building a team to properly support PE would be a good idea.

I looked into the code base, and was shocked to find that the entire application depended on a technology called WebSQL. It only ran on Chrome, and Google had deprecated WebSQL over a year earlier. I kind of flopped back in my chair, dragged my manager to a room, and told him that Google could shut off 25% of Facebook’s revenue, and lose us all our large accounts, by turning off WebSQL in Chrome, and it could happen any time.

This became a closely held secret in Facebook Ads leadership. We didn’t want to take any chance that word of this vulnerability could get back to Google. They had already deprecated WebSQL, and other browsers had removed it. They would have been well within their rights to just flip a feature flag in Chrome and do the same.

There was a whispered joke among those who knew about it that Google never had to bother building Google+, they could shut us down by changing one boolean in a database.

We quickly put in place a team of 5 engineers and one PM to work on it, with me managing it and coding probably 75% of the time. The plan was fairly complex. There was no easy way to get us off of WebSQL without a full rewrite of 150k lines of JavaScript. We couldn’t just build a new application from the ground up, that would take years to support all the features, and Google could turn off WebSQL at any time. Also, PE was falling apart – it was blocking the entire company from shipping any new ads products at all.

So, we decided to first stop it falling apart, with a lot of performance and reliability work. Next, we had a long running project called PE Live, where we took each subset of code and made it read from the live API rather than locally from the WebSQL database. To unblock the other teams we would rewrite the whole thing in ReactJS; at the time it was built on two frameworks that we had sunset, UkiJS and BoltJS.

This whole process took over three years. By the time it was complete in 2016, we had improved Power Editor so much, with better features, more stability, speed and ease of development for partner teams, that over 50% of all Facebook’s revenue was spent through it. The team grew to 13 engineers, with lots of help from dozens more across the Ads organisation building new APIs and infrastructure to support our work.

Google could have killed it at any time, and there were no complete alternatives for our customers – some third party applications existed, but they were even buggier than PE, were always late with new features (we didn’t have to wait for a new API to be public, they did), and often each specialised in a subset of the features. 50% of our revenue disappearing overnight could have happened. That it didn’t, and that we moved mountains of code doing the horrible, inglorious work of rewriting hundreds of thousands of lines of spaghetti code in production while people used the product, all while building an infinitely better product, made this the most satisfying period of professional work I’ve ever been fortunate enough to experience.

A huge thanks to all the amazing people I worked with on that crazy project, you’re the best team I ever worked with, and I’d hop into the trenches with you again any time!

Power Editor as I left it in 2016, after 6.5 years at Facebook.

[Edit: So this ended up on the front page of Hacker News, and there’s much more conversation about it over at https://news.ycombinator.com/item?id=26086056 ]

Specifying a default form submit button that works with Safari

When you have an HTML form with multiple submit buttons and the user hits the Enter key, the browser submits the form as if the first submit button in the form had been clicked. Given that the button can have name and value attributes, the server can use this information. For example:

<button name="command" value="save">Save</button>
<button name="command" value="delete">Delete</button>

Unfortunately it’s not always possible to put the button you want to be default first in the DOM. A common hack [1] is to use CSS to either float or absolutely position the default button to appear where you want it. In this case you can put a hidden button at the top of the form that is a duplicate of the one visible to the user, e.g.

<form>
  <div style="display:none">
    <button name="command" value="save">This is a duplicate save button</button>
  </div>

  <button name="command" value="delete">Delete</button>
  <button name="command" value="save">Save</button>
</form>

This works well, except in mobile Safari. It seems that because the duplicate button is hidden, the browser ignores it when the user hits Enter. To fix this, instead of using style="display: none", absolutely position it off the screen, e.g.

<form>
  <div style="position: absolute; top: -10000px; left: -10000px;">
    <button name="command" value="save">This is a duplicate save button</button>
  </div>

  <button name="command" value="delete">Delete</button>
  <button name="command" value="save">Save</button>
</form>

And there you have it, a nasty hack to work around the fact there is no way to explicitly specify the default button.