Video Rendering with Node.js and FFmpeg

30 September 2022 | 23 min read
Casper Kloppenburg

Introduction

There is no better way to convey your message than by using video, since it is more engaging and fun than any other medium. This is especially true for social media, which has become increasingly video-centric. There is also a growing trend of using video in email campaigns. Wouldn't it be great if we could automate the creation of these videos programmatically, using something like a video template and dynamic data? As a matter of fact, this is possible. The best part is that all of this can be done with open source software and without using C++ or any other low-level language.

In this tutorial, you'll learn how to accomplish this using pure JavaScript and Node.js, along with the help of FFmpeg for the encoding. Let's say we're developing an application for a travel agency that wants to generate custom videos tailored to various travel destinations. They want to create these videos by the thousands, so we're going to build an application for them that renders these custom videos in bulk.

The dynamic video we are going to create using pure JavaScript and FFmpeg.

Would you prefer to use an API rather than managing your own server infrastructure? Check out our Video Editing API for developers. It allows you to design your own templates using a drag-and-drop video editor and automate them through a simple REST API. Better yet, it's less expensive than hosting your own video servers.

Step 1 – Setting up the project

Let's start by opening up our terminal and creating our project's directory:

$ mkdir video-rendering

Navigate to the new directory:

$ cd video-rendering

Use npm init to create a new Node.js project. This will create a package.json file.

$ npm init -y

Our new project requires a few dependencies. We'll use the canvas package to draw our graphics, which is an implementation of the HTML5 Canvas element for Node.js. We also need the ffmpeg-static package, which provides a static FFmpeg binary for our platform, and fluent-ffmpeg to make it easy to use FFmpeg from our Node.js app.

$ npm i canvas ffmpeg-static fluent-ffmpeg

Since we'll be using ES modules and top-level await, the "type": "module" line needs to be added to our package.json file:

{
  "name": "video-rendering",
  "type": "module",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "Casper",
  "license": "ISC",
  "dependencies": {
    "canvas": "^2.10.1",
    "ffmpeg-static": "^5.1.0",
    "fluent-ffmpeg": "^2.1.2"
  }
}

Let's create a new folder called src for our source files, and a folder assets for our media files:

$ mkdir src
$ mkdir assets

We're going to use an audio clip, some videos, an image, and a couple of fonts as input for our dynamic video. You can find these files in the GitHub repository with the final code. Make sure to download these files and put them in the assets directory that we just created. Our project directory should now look like this:

├── assets
│   ├── catch-up-loop-119712.mp3
│   ├── caveat-medium.ttf
│   ├── chivo-regular.ttf
│   ├── logo.svg
│   ├── pexels-2829177.mp4
│   ├── pexels-3576378.mp4
│   └── pexels-4782135.mp4
├── node_modules
│   └── ...
├── src
│   └── index.js
├── package.json
└── package-lock.json

Step 2 – Rendering a simple video

In this step, we'll create a simple video with only a single element, just to go over the basics of video rendering. Building on those basics, we'll then introduce easing, keyframes, drawing contexts, and transformations. Let's begin by drawing a single frame.

Drawing a single frame

Open index.js in your editor of choice. To begin with, we'll create a new Canvas instance. Imagine it as a blank sheet of paper, 1280 pixels wide and 720 pixels high, on which we can draw pictures, shapes, and text. The context object gives us the interface for drawing on the canvas. Next, we load assets/logo.svg and draw it at position x=100 and y=100 with dimensions 500 by 500. The last step is to make an image file from the canvas and save it to disk:

import fs from 'fs';
import { Canvas, loadImage } from 'canvas';

// Create a new canvas of 1280 by 720
const canvas = new Canvas(1280, 720);
const context = canvas.getContext('2d');

// Load the image from file
const logo = await loadImage('assets/logo.svg');

// Draw the image to the canvas at x=100 and y=100 with a size of 500x500
context.drawImage(logo, 100, 100, 500, 500);

// Write the image to disk as a PNG
const output = canvas.toBuffer('image/png');
await fs.promises.writeFile('image.png', output);

Now let's execute our code. Run the following from your terminal. The file image.png should be created in your project directory.

$ node src/index.js

The image that we just drew. It's not very exciting, but hang on.

It's time to step it up a notch. Rather than making a still image, let's make a video with motion.

Rendering a video with motion

When we think about it, a video is nothing more than a series of images displayed in quick succession. These images are also known as frames. In video, we talk about frames per second as a measure of how many images are displayed every second. The lower the FPS, the more jerky the video will be. A minimum of 25 fps is recommended for video, but at Creatomate, we render our videos at 60 fps for smoother animations.

We're going to make a 3-second video with a frame rate of 60, where an image moves from left to right. This requires creating 3 * 60 = 180 slightly different frames, which we then stitch together into a video using FFmpeg. And while we're at it, we also tell it to add a soundtrack to the video.

Let's make a utility function in src/utils/stitchFramesToVideo.js to do this. This is also where fluent-ffmpeg comes in. Because FFmpeg is a separate program, it can be awkward to use with Node.js. Fortunately, fluent-ffmpeg takes care of that by providing an interface that we can use from JavaScript. Here's the function:

import ffmpeg from 'fluent-ffmpeg';

export async function stitchFramesToVideo(
  framesFilepath,
  soundtrackFilePath,
  outputFilepath,
  duration,
  frameRate,
) {

  await new Promise((resolve, reject) => {
    ffmpeg()

      // Tell FFmpeg to stitch all images together in the provided directory
      .input(framesFilepath)
      .inputOptions([
        // Set input frame rate
        `-framerate ${frameRate}`,
      ])

      // Add the soundtrack
      .input(soundtrackFilePath)
      .audioFilters([
        // Fade out the volume 2 seconds before the end
        `afade=out:st=${duration - 2}:d=2`,
      ])

      .videoCodec('libx264')
      .outputOptions([
        // YUV color space with 4:2:0 chroma subsampling for maximum compatibility with
        // video players
        '-pix_fmt yuv420p',
      ])

      // Set the output duration. It is required because FFmpeg would otherwise
      // automatically set the duration to the longest input, and the soundtrack might
      // be longer than the desired video length
      .duration(duration)
      // Set output frame rate
      .fps(frameRate)

      // Resolve or reject (throw an error) the Promise once FFmpeg completes
      .saveToFile(outputFilepath)
      .on('end', () => resolve())
      .on('error', (error) => reject(new Error(error)));
  });
}

Now let's return to index.js and write the code to generate each frame. In the following code, we're creating 180 images that we store in a temporary directory at tmp/output. We then run stitchFramesToVideo, which calls FFmpeg to convert the frames to a video.

import fs from 'fs';
import ffmpegStatic from 'ffmpeg-static';
import ffmpeg from 'fluent-ffmpeg';
import { Canvas, loadImage } from 'canvas';
import { stitchFramesToVideo } from './utils/stitchFramesToVideo.js';

// Tell fluent-ffmpeg where it can find FFmpeg
ffmpeg.setFfmpegPath(ffmpegStatic);

// Clean up the temporary directories first
for (const path of ['out', 'tmp/output']) {
  if (fs.existsSync(path)) {
    await fs.promises.rm(path, { recursive: true });
  }
  await fs.promises.mkdir(path, { recursive: true });
}

const canvas = new Canvas(1280, 720);
const context = canvas.getContext('2d');

const logo = await loadImage('assets/logo.svg');

// The video length and frame rate, as well as the number of frames required
// to create the video
const duration = 3;
const frameRate = 60;
const frameCount = Math.floor(duration * frameRate);

// Render each frame
for (let i = 0; i < frameCount; i++) {

  const time = i / frameRate;

  console.log(`Rendering frame ${i} at ${Math.round(time * 10) / 10} seconds...`);

  // Clear the canvas with a white background color. This is required as we are
  // reusing the canvas with every frame
  context.fillStyle = '#ffffff';
  context.fillRect(0, 0, canvas.width, canvas.height);

  renderFrame(context, duration, time);

  // Store the image in the directory where it can be found by FFmpeg
  const output = canvas.toBuffer('image/png');
  const paddedNumber = String(i).padStart(4, '0');
  await fs.promises.writeFile(`tmp/output/frame-${paddedNumber}.png`, output);
}

// Stitch all frames together with FFmpeg
await stitchFramesToVideo(
  'tmp/output/frame-%04d.png',
  'assets/catch-up-loop-119712.mp3',
  'out/video.mp4',
  duration,
  frameRate,
);

function renderFrame(context, duration, time) {

  // Calculate the progress of the animation from 0 to 1
  let t = time / duration;

  // Draw the image from left to right over a distance of 550 pixels
  context.drawImage(logo, 100 + t * 550, 100, 500, 500);
}

Let's run our code again. The file out/video.mp4 should be created in your project directory. As we can see in the video, the logo appears to be moving, but it feels a little flat. We'll see what we can do about that in the next step.

Still not very exciting, but we're making progress.

Step 3 – Using keyframes and easing

While we were able to make the object move from left to right, the animation appears bland. Why is that? In the physical world, virtually nothing moves linearly. But that is exactly what we did in our animation – we simply interpolated the x position over time in a linear way. Let's fix that.

Applying simple easing

To make the motion feel more natural, we have to change the velocity of the motion over time, so that the object starts slowly, builds up speed gradually, then slows down again as it gets close to the destination. Easing functions help us do this. Let's apply an easing to our animation:

// ...

function renderFrame(context, duration, time) {

  // Calculate the progress of the animation from 0 to 1
  let t = time / duration;

  // Apply cubic in-out easing, see https://easings.net/#easeInOutCubic
  t = applyCubicInOutEasing(t);

  // Draw the image from left to right over a distance of 550 pixels
  context.drawImage(logo, 100 + t * 550, 100, 500, 500);
}

function applyCubicInOutEasing(t) {
  return t < 0.5 ? 4 * t * t * t : 1 - Math.pow(-2 * t + 2, 3) / 2;
}
The animation after applying easing.

There are a bunch of easing functions we can use to make our animations look more interesting. Take a look at easings.net for a list of the most popular ones with JavaScript examples.
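
To give an idea, here's how two more popular easings from that list translate to JavaScript. The formulas follow easings.net; the function names are just our own convention (the exponential one reappears in the keyframe helper later in this tutorial):

// Quadratic in-out: accelerate, then decelerate (gentler than cubic)
function applyQuadInOutEasing(t) {
  return t < 0.5 ? 2 * t * t : 1 - Math.pow(-2 * t + 2, 2) / 2;
}

// Exponential out: start fast, then decelerate sharply toward the end
function applyExponentialOutEasing(t) {
  return t === 1 ? 1 : 1 - Math.pow(2, -10 * t);
}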

Keyframes with easing

Up until now, a simple calculation was enough to determine the object's position. But what if we want our object to move to multiple locations? Here's where keyframes come in handy. You can think of keyframes as values at specific points in time. Keyframes make it easier to build complicated animations, so we're introducing a new utility function to help with that. First, let's take a look at how it's used before moving on to its implementation:

// ...

function renderFrame(context, duration, time) {

  // Calculate the x position over time
  const x = interpolateKeyframes([
    // At time 0, we want x to be 100
    { time: 0, value: 100 },
    // At time 1.5, we want x to be 550 (using cubic in-out easing)
    { time: 1.5, value: 550, easing: 'cubic-in-out' },
    // At time 3, we want x to be 200 (using cubic in-out easing)
    { time: 3, value: 200, easing: 'cubic-in-out' },
  ], time);

  // Draw the image
  context.drawImage(logo, x, 100, 500, 500);
}

And here is the implementation of src/utils/interpolateKeyframes.js. Don't forget to import it into src/index.js, then run our code again to see how the video turns out.

export function interpolateKeyframes(keyframes, time) {

  if (keyframes.length < 2) {
    throw new Error('At least two keyframes should be provided');
  }

  // Take the value of the first keyframe if the provided time is before it
  const firstKeyframe = keyframes[0];
  if (time < firstKeyframe.time) {
    return firstKeyframe.value;
  }

  // Take the value of the last keyframe if the provided time is after it
  const lastKeyframe = keyframes[keyframes.length - 1];
  if (time >= lastKeyframe.time) {
    return lastKeyframe.value;
  }

  // Find the keyframes before and after the provided time, like this:
  //
  //                   Time
  // ───  [Keyframe] ───┸───── [Keyframe] ──── [...]
  //
  let index;
  for (index = 0; index < keyframes.length - 1; index++) {
    if (keyframes[index].time <= time && keyframes[index + 1].time >= time) {
      break;
    }
  }

  const keyframe1 = keyframes[index];
  const keyframe2 = keyframes[index + 1];

  // Find out where the provided time falls between the two keyframes from 0 to 1
  let t = (time - keyframe1.time) / (keyframe2.time - keyframe1.time);

  // Apply easing
  if (keyframe2.easing === 'expo-out') {
    t = applyExponentialOutEasing(t);
  } else if (keyframe2.easing === 'cubic-in-out') {
    t = applyCubicInOutEasing(t);
  } else {
    // ... Implement more easing functions
  }

  // Return the interpolated value
  return keyframe1.value + (keyframe2.value - keyframe1.value) * t;
}

// Exponential out easing
function applyExponentialOutEasing(t) {
  return t === 1 ? 1 : 1 - Math.pow(2, -10 * t);
}

// Cubic in-out easing
function applyCubicInOutEasing(t) {
  return t < 0.5 ? 4 * t * t * t : 1 - Math.pow(-2 * t + 2, 3) / 2;
}
The video with keyframes applied.
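
As a quick sanity check of interpolateKeyframes(), here's what the keyframes from the example above evaluate to at an arbitrary sample time:

import { interpolateKeyframes } from './utils/interpolateKeyframes.js';

const x = interpolateKeyframes([
  { time: 0, value: 100 },
  { time: 1.5, value: 550, easing: 'cubic-in-out' },
  { time: 3, value: 200, easing: 'cubic-in-out' },
], 0.75);

// At 0.75 seconds we're halfway between the first two keyframes (t = 0.5).
// Cubic in-out easing maps 0.5 to 0.5, so: 100 + (550 - 100) * 0.5 = 325
console.log(x); // 325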

Step 4 – Context and transformation

Now with that in place, we are almost ready to put the video together. But first, let's talk about contexts and transformations, two important concepts that we'll be using a lot in the final video.

The drawing context

Whenever we draw something on the canvas, we are giving the rasterizer information about how we want our graphics to look via the drawing context. This can best be explained with an example. Say we want to draw a red rectangle that is rotated 45 degrees. To draw the shape we want, we can use the context to transform and style it:

import fs from 'fs';
import { Canvas } from 'canvas';

// Create a new canvas of 1280 by 720
const canvas = new Canvas(1280, 720);
const context = canvas.getContext('2d');

// Move the drawing context to x=500 and y=100
context.translate(500, 100);

// Rotate by 45 degrees
// The function expects an angle in radians, so we have to convert degrees to radians first
context.rotate(45 * Math.PI / 180);

// Set the fill style
context.fillStyle = '#f05756';

// Draw a filled rectangle
context.fillRect(0, 0, 400, 400);

// Write the image to disk as a PNG
const output = canvas.toBuffer('image/png');
await fs.promises.writeFile('image.png', output);
If we run the above code, this is what we'll see.

Save() and restore()

We can switch between context states with context.save() and context.restore(). Imagine it as a stack. Every time we call save(), we add a copy of the current context state to the stack. Using restore(), we can get back to the previous state, which removes the topmost state from the stack and makes it the current one. So for every context.save(), there should be a context.restore(). Though it might seem complicated, this is actually a very convenient way to draw complex graphics, as you'll see in the final code. Let's see it in action with the following example:

import fs from 'fs';
import { Canvas } from 'canvas';

// Create a new canvas of 1280 by 720
const canvas = new Canvas(1280, 720);
const context = canvas.getContext('2d');

// Set the fill style to blue
context.fillStyle = 'blue';

// Move the drawing context to x=500 and y=100
context.translate(500, 100);

// Save the current context
context.save();

// Rotate by 45 degrees
// The function expects an angle in radians, so we have to convert degrees to radians first
context.rotate(45 * Math.PI / 180);

// Set the fill style to red
context.fillStyle = '#f05756';

// Draw a filled rectangle. This one should be red
context.fillRect(0, 0, 400, 400);

// Restore the context that was active before we called context.save()
context.restore();

// Draw another filled rectangle
// As we restored the context, this one should be blue and not rotated,
// but still drawn at x=500 and y=100
context.fillRect(0, 0, 400, 400);

// Write the image to disk as a PNG
const output = canvas.toBuffer('image/png');
await fs.promises.writeFile('image.png', output);
Two rectangles drawn from different drawing contexts.

This MDN page explains the canvas context in more detail. Although that page talks about the HTML Canvas, almost every property has been ported to Node.js by the canvas package.
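
To illustrate how familiar the API feels, here's a small sketch that combines a few of those ported features – a linear gradient, a drop shadow, and text rendering. The colors and sizes are arbitrary:

import fs from 'fs';
import { Canvas } from 'canvas';

const canvas = new Canvas(1280, 720);
const context = canvas.getContext('2d');

// Fill the background with a left-to-right linear gradient
const gradient = context.createLinearGradient(0, 0, canvas.width, 0);
gradient.addColorStop(0, '#f05756');
gradient.addColorStop(1, '#4a90d9');
context.fillStyle = gradient;
context.fillRect(0, 0, canvas.width, canvas.height);

// Draw centered text with a soft drop shadow
context.shadowColor = 'rgba(0, 0, 0, 0.5)';
context.shadowBlur = 10;
context.fillStyle = '#ffffff';
context.font = '80px sans-serif';
context.textAlign = 'center';
context.fillText('Hello from node-canvas', canvas.width / 2, canvas.height / 2);

// Write the image to disk as a PNG
const output = canvas.toBuffer('image/png');
await fs.promises.writeFile('image.png', output);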

Step 5 – Extracting frames from our input videos

Saving video frames with FFmpeg

Are you still with me? Good! We're getting closer to our goal. As you saw in the introduction, we use a few videos as inputs for our final video. In order to use them, we must first extract their frames. Once again, FFmpeg comes to the rescue. Let's make a utility function at src/utils/extractFramesFromVideo.js that instructs FFmpeg to do this.

import ffmpeg from 'fluent-ffmpeg';

// Example usage: await extractFramesFromVideo('video.mp4', 'frame-%04d.png', 60);
export async function extractFramesFromVideo(inputFilepath, outputFilepath, frameRate) {

  await new Promise((resolve, reject) => {
    ffmpeg()

      // Specify the filepath to the video
      .input(inputFilepath)

      // Instruct FFmpeg to extract frames at this rate regardless of the video's frame rate
      .fps(frameRate)

      // Save frames to this directory
      .saveToFile(outputFilepath)

      .on('end', () => resolve())
      .on('error', (error) => reject(new Error(error)));
  });
}

Reading the frames back in

After the frames have been extracted by FFmpeg, we can retrieve them using the loadImage method we used before. For our convenience, let's add another utility function at src/utils/getVideoFrameReader.js:

import fs from 'fs';
import path from 'path';
import { loadImage } from 'canvas';
import { extractFramesFromVideo } from './extractFramesFromVideo.js';

/* Example usage:
  const getNextFrame = await getVideoFrameReader('video.mp4', 'tmp', 60);
  await getNextFrame();    // Returns frame 1
  await getNextFrame();    // Returns frame 2
  await getNextFrame();    // Returns frame 3
*/
export async function getVideoFrameReader(videoFilepath, tmpDir, frameRate) {

  // Extract frames using FFmpeg
  await extractFramesFromVideo(videoFilepath, path.join(tmpDir, 'frame-%04d.png'), frameRate);

  // Get the filepaths to the frames and sort them alphabetically
  // so we can read them back in the right order
  const filepaths = (await fs.promises.readdir(tmpDir))
    .map(file => path.join(tmpDir, file))
    .sort();

  let frameNumber = 0;

  // Return a function that returns the next frame every time it is called
  return async () => {

    // Load a frame image
    const frame = await loadImage(filepaths[frameNumber]);

    // Next time, load the next frame
    if (frameNumber < filepaths.length - 1) {
      frameNumber++;
    }

    return frame;
  };
}

Step 6 – Putting it all together

We now have everything we need to make our video rendering script. For the sake of brevity, we'll focus on the most interesting source files, index.js and renderMainComposition.js, as the rest is just rehashing what we've already covered. However, the full project is available on GitHub and you are welcome to check it out and tinker with it.

Let's start with src/index.js. We begin by cleaning up any temporary files left over from a previous run. We then extract all the frames from our input videos. Next, we load the logo image and fonts. Once that's done, it's time to start rendering. For each frame of our video, we use the reader functions returned by getVideoFrameReader() to load a frame from each input video into image1, image2, and image3. Then we let renderMainComposition() do the actual rendering, which we'll talk about next. Finally, we stitch the rendered frames of our video into an MP4 using FFmpeg:

import fs from 'fs';
import ffmpegStatic from 'ffmpeg-static';
import ffmpeg from 'fluent-ffmpeg';
import { Canvas, loadImage, registerFont } from 'canvas';
import { stitchFramesToVideo } from './utils/stitchFramesToVideo.js';
import { renderMainComposition } from './compositions/renderMainComposition.js';
import { getVideoFrameReader } from './utils/getVideoFrameReader.js';

// Tell fluent-ffmpeg where it can find FFmpeg
ffmpeg.setFfmpegPath(ffmpegStatic);

// Clean up and recreate the temporary directories first. The frame directories
// must exist before FFmpeg writes to them, as it won't create them by itself
for (const path of ['out', 'tmp/output', 'tmp/video-1', 'tmp/video-2', 'tmp/video-3']) {
  if (fs.existsSync(path)) {
    await fs.promises.rm(path, { recursive: true });
  }
  await fs.promises.mkdir(path, { recursive: true });
}

// The video length and frame rate, as well as the number of frames required
// to create the video
const duration = 9.15;
const frameRate = 60;
const frameCount = Math.floor(duration * frameRate);

console.log('Extracting frames from video 1...');
const getVideo1Frame = await getVideoFrameReader(
  'assets/pexels-4782135.mp4',
  'tmp/video-1',
  frameRate,
);

console.log('Extracting frames from video 2...');
const getVideo2Frame = await getVideoFrameReader(
  'assets/pexels-3576378.mp4',
  'tmp/video-2',
  frameRate,
);

console.log('Extracting frames from video 3...');
const getVideo3Frame = await getVideoFrameReader(
  'assets/pexels-2829177.mp4',
  'tmp/video-3',
  frameRate,
);

const logo = await loadImage('assets/logo.svg');

// Load fonts so we can use them for drawing
registerFont('assets/caveat-medium.ttf', { family: 'Caveat' });
registerFont('assets/chivo-regular.ttf', { family: 'Chivo' });

const canvas = new Canvas(1280, 720);
const context = canvas.getContext('2d');

// Render each frame
for (let i = 0; i < frameCount; i++) {

  const time = i / frameRate;

  console.log(`Rendering frame ${i} at ${Math.round(time * 10) / 10} seconds...`);

  // Clear the canvas with a white background color. This is required as we are
  // reusing the canvas with every frame
  context.fillStyle = '#ffffff';
  context.fillRect(0, 0, canvas.width, canvas.height);

  // Grab a frame from our input videos
  const image1 = await getVideo1Frame();
  const image2 = await getVideo2Frame();
  const image3 = await getVideo3Frame();

  renderMainComposition(
    context,
    image1,
    image2,
    image3,
    logo,
    canvas.width,
    canvas.height,
    time,
  );

  // Store the image in the directory where it can be found by FFmpeg
  const output = canvas.toBuffer('image/png');
  const paddedNumber = String(i).padStart(4, '0');
  await fs.promises.writeFile(`tmp/output/frame-${paddedNumber}.png`, output);
}

console.log(`Stitching ${frameCount} frames to video...`);

await stitchFramesToVideo(
  'tmp/output/frame-%04d.png',
  'assets/catch-up-loop-119712.mp3',
  'out/video.mp4',
  duration,
  frameRate,
);

We'll finish up by looking at renderMainComposition.js. You can see in the final video that we transition between two scenes: one with polaroid pictures, and one with a logo and caption. The function renderMainComposition() takes care of that. We use interpolateKeyframes() to drive the transition with two keyframes and cubic easing.

The renderThreePictures() and renderOutro() functions are then called to render those scenes. Note that we specify widths and heights as relative fractions of the canvas size. This allows us to render the video at different resolutions: we render at 1280 by 720 as specified in src/index.js, but we could also render at a lower resolution, such as 480 by 270, to speed up the rendering process, as long as the aspect ratio remains the same.

To get the transition slide effect, we're using context.translate() to offset the location where the scenes are drawn during the transition. To fade the scenes in and out, we're adjusting the opacity with context.globalAlpha.

As a final step, we restore the context by calling context.restore() before returning to the caller, ensuring the same drawing context as when the function was invoked.

import { interpolateKeyframes } from '../utils/interpolateKeyframes.js';
import { renderThreePictures } from './renderThreePictures.js';
import { renderOutro } from './renderOutro.js';

export function renderMainComposition(
  context,
  image1,
  image2,
  image3,
  logo,
  width,
  height,
  time,
) {

  // Interpolate the x position to create a slide effect between the polaroid pictures scene
  // and the outro scene
  const slideProgress = interpolateKeyframes([
    { time: 6.59, value: 0 },
    { time: 7.63, value: 1, easing: 'cubic-in-out' },
  ], time);

  // Scene 1 – The three polaroid pictures

  // Move the slide over 25% of the canvas width while adjusting its opacity with globalAlpha
  context.save();
  context.translate((0.25 * width) * -slideProgress, 0);
  context.globalAlpha = 1 - slideProgress;

  // Render the polaroid picture scene using relative sizes
  renderThreePictures(context, image1, image2, image3, 0.9636 * width, 0.8843 * height, time);

  context.restore();

  // Scene 2 – The outro

  // Move the slide over 25% of the canvas width while adjusting its opacity with globalAlpha
  context.save();
  context.translate((0.25 * width) * (1 - slideProgress), 0);
  context.globalAlpha = slideProgress;

  renderOutro(context, logo, width, height, time - 6.59);

  context.restore();
}
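
The implementations of renderThreePictures() and renderOutro() are in the GitHub repository. To give an impression of the pattern, here's a minimal sketch of what an outro-style scene could look like – the keyframe values, sizes, and caption below are illustrative, not the repository's actual values:

import { interpolateKeyframes } from '../utils/interpolateKeyframes.js';

// A minimal outro sketch: fade in a logo and a caption (illustrative values)
export function renderOutro(context, logo, width, height, time) {

  // Fade the scene in over half a second
  const opacity = interpolateKeyframes([
    { time: 0, value: 0 },
    { time: 0.5, value: 1 },
  ], time);

  context.save();
  context.globalAlpha = opacity;

  // Draw the logo centered horizontally, sized relative to the canvas
  const logoSize = 0.2 * height;
  context.drawImage(logo, (width - logoSize) / 2, 0.3 * height, logoSize, logoSize);

  // Draw a caption beneath it using one of the registered fonts
  context.fillStyle = '#000000';
  context.font = `${0.08 * height}px Caveat`;
  context.textAlign = 'center';
  context.fillText('Your travel story starts here', width / 2, 0.7 * height);

  context.restore();
}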

Final thoughts

I hope this article provided you with some insight into how to render dynamic videos using Node.js and FFmpeg, and the techniques involved. The next step might be to import dynamic data from a spreadsheet, so you can generate them in bulk. Or you can deploy this code to a server to render customized videos for your website visitors.
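
To sketch the spreadsheet idea: if the destination data were exported to a CSV file, the render loop could simply be wrapped per row. The file name and columns below are hypothetical:

import fs from 'fs';

// Hypothetical input: a two-column CSV file like "destination,tagline"
const csv = await fs.promises.readFile('destinations.csv', 'utf8');

const rows = csv
  .trim()
  .split('\n')
  .slice(1) // skip the header row
  .map(line => {
    const [destination, tagline] = line.split(',');
    return { destination, tagline };
  });

for (const row of rows) {
  console.log(`Rendering video for ${row.destination}...`);

  // Run the frame loop and stitchFramesToVideo() from this tutorial here,
  // substituting row.destination and row.tagline into the rendered text,
  // and writing each result to a per-row output file such as:
  // `out/${row.destination}.mp4`
}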

Of course, you need to make your code scalable so that it can handle the load for video rendering, which can quickly get complicated and expensive. This is why we built a cloud service to help you, providing a template editor where you can create your videos and then automate them with no-code integrations and API. If you enjoyed this article, it might be worth checking out.

Happy video rendering!
