Coder Perfect

Taking screenshots in-browser with HTML5, Canvas, and JavaScript

Problem

Google’s “Report a Bug” or “Feedback Tool” lets you select an area of your browser window to create a screenshot that is then submitted along with your feedback about a bug.

(Screenshot by Jason Small, posted in a duplicate question.)

How are they doing this? Google’s JavaScript feedback API can be loaded from here, and the overview of the feedback module demonstrates the screenshot capability.

Asked by joelvh

Solution #1

JavaScript can read the DOM and use canvas to render a fairly accurate representation of it. I have been working on a script that converts HTML into a canvas image, and today I decided to put it into practice by building the kind of screenshot-based feedback form you describe.

The script lets you create feedback forms that include a snapshot rendered in the client’s browser, along with the form itself. Because the snapshot is built from the DOM rather than captured from the screen, it may not be 100% accurate to the real rendering: it reconstructs the page from the information available on it.

Because the entire image is created in the client’s browser, no server-side rendering is required. The html2canvas script is still at a very early stage of development: it doesn’t parse nearly as many CSS3 properties as I’d like it to, nor does it support loading CORS images even when a proxy is available.

Browser compatibility is still limited (not because more browsers couldn’t be supported, but because there simply hasn’t been time to make it more cross-browser compatible).

Take a look at the following samples for further information:

http://hertzen.com/experiments/jsfeedback/

Edit: The html2canvas script and the associated examples are now available separately here.

Second edit: Another indication that Google uses a very similar mechanism (judging by the documentation, the only major difference is their asynchronous approach to traversing/drawing) is this presentation by Elliott Sprehn from the Google Feedback team: http://www.elliottsprehn.com/preso/fluentconf/

Answered by Niklas

Solution #2

With getUserMedia(), your web app can now take a ‘native’ screenshot of the client’s entire desktop:

Take a look at the following example:

https://www.webrtc-experiment.com/Pluginfree-Screen-Sharing/

For the time being, the client must use Chrome and enable screen-capture support under chrome://flags.
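
In current browsers the same capability is exposed without a flag through the standard Screen Capture API; here is a minimal sketch of that call (the function name captureScreenStream is only illustrative):

// Minimal sketch: the standard Screen Capture API, no flag needed in current browsers.
// Prompts the user to pick a screen, window or tab and resolves with a MediaStream.
async function captureScreenStream() {
    return navigator.mediaDevices.getDisplayMedia({ video: true })
}

Solutions #4 and #5 below show how to turn such a stream into an image.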

Answered by Matt Sinclair

Solution #3

As Niklas indicated, you can take a screenshot in the browser with the html2canvas library. To expand on his answer, here is an example of taking a screenshot with it (“proof of concept”):
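
A minimal sketch of such a proof of concept, assuming the old html2canvas onrendered callback API; report() and the /bug-report endpoint are only placeholders for your own upload code:

// Render the current page into a canvas, then pass the result on as a data URI
html2canvas(document.body, {
    onrendered: function (canvas) {
        // the rendered page as a data URI
        var screenshotDataUri = canvas.toDataURL('image/png')
        report(screenshotDataUri)
    }
})

// Placeholder: show the image, let the user mark a bug region, upload everything
function report(screenshotDataUri) {
    fetch('/bug-report', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ screenshot: screenshotDataUri })
    })
}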

After retrieving the image as a data URI, you can display it to the user, let them draw a “bug region” with the mouse, and then send the screenshot and the region coordinates to the server from the report() function (called in onrendered).

An async/await version with a handy makeScreenshot() function is also available.
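
A minimal sketch of that variant, assuming the promise-based html2canvas 1.x API; the body of makeScreenshot() here is only illustrative:

// Render an element (the whole page by default) and return it as a data URI
async function makeScreenshot(selector = 'body') {
    const node = document.querySelector(selector)
    const canvas = await html2canvas(node)   // html2canvas 1.x returns a Promise<canvas>
    return canvas.toDataURL('image/png')
}

// usage: const dataUri = await makeScreenshot()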

A simple example (jsfiddle here) lets you take a screenshot, select a region, describe the bug, and send a POST request (the main function is report()).

Answered by Kamil Kiełczewski

Solution #4

Using the getDisplayMedia API, you can capture a screenshot as a Canvas or as a JPEG Blob / ArrayBuffer:

FIX 1: Use getUserMedia with chromeMediaSource only for Electron.js.
FIX 2: Throw an error instead of returning null.
FIX 3: Fix the demo to prevent the error: getDisplayMedia must be called from a user-gesture handler.

// docs: https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia
// see: https://www.webrtc-experiment.com/Pluginfree-Screen-Sharing/#20893521368186473
// see: https://github.com/muaz-khan/WebRTC-Experiment/blob/master/Pluginfree-Screen-Sharing/conference.js

function getDisplayMedia(options) {
    if (navigator.mediaDevices && navigator.mediaDevices.getDisplayMedia) {
        return navigator.mediaDevices.getDisplayMedia(options)
    }
    if (navigator.getDisplayMedia) {
        return navigator.getDisplayMedia(options)
    }
    if (navigator.webkitGetDisplayMedia) {
        return navigator.webkitGetDisplayMedia(options)
    }
    if (navigator.mozGetDisplayMedia) {
        return navigator.mozGetDisplayMedia(options)
    }
    throw new Error('getDisplayMedia is not defined')
}

function getUserMedia(options) {
    if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
        return navigator.mediaDevices.getUserMedia(options)
    }
    if (navigator.getUserMedia) {
        return navigator.getUserMedia(options)
    }
    if (navigator.webkitGetUserMedia) {
        return navigator.webkitGetUserMedia(options)
    }
    if (navigator.mozGetUserMedia) {
        return navigator.mozGetUserMedia(options)
    }
    throw new Error('getUserMedia is not defined')
}

async function takeScreenshotStream() {
    // see: https://developer.mozilla.org/en-US/docs/Web/API/Window/screen
    const width = screen.width * (window.devicePixelRatio || 1)
    const height = screen.height * (window.devicePixelRatio || 1)

    const errors = []
    let stream
    try {
        stream = await getDisplayMedia({
            audio: false,
            // see: https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamConstraints/video
            video: {
                width,
                height,
                frameRate: 1,
            },
        })
    } catch (ex) {
        errors.push(ex)
    }

    // for electron js
    if (navigator.userAgent.indexOf('Electron') >= 0) {
        try {
            stream = await getUserMedia({
                audio: false,
                video: {
                    mandatory: {
                        chromeMediaSource: 'desktop',
                        // chromeMediaSourceId: source.id,
                        minWidth         : width,
                        maxWidth         : width,
                        minHeight        : height,
                        maxHeight        : height,
                    },
                },
            })
        } catch (ex) {
            errors.push(ex)
        }
    }

    if (errors.length) {
        console.debug(...errors)
        if (!stream) {
            throw errors[errors.length - 1]
        }
    }

    return stream
}

async function takeScreenshotCanvas() {
    const stream = await takeScreenshotStream()

    // from: https://stackoverflow.com/a/57665309/5221762
    const video = document.createElement('video')
    const result = await new Promise((resolve, reject) => {
        video.onloadedmetadata = () => {
            video.play()
            video.pause()

            // from: https://github.com/kasprownik/electron-screencapture/blob/master/index.js
            const canvas = document.createElement('canvas')
            canvas.width = video.videoWidth
            canvas.height = video.videoHeight
            const context = canvas.getContext('2d')
            // see: https://developer.mozilla.org/en-US/docs/Web/API/HTMLVideoElement
            context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight)
            resolve(canvas)
        }
        video.srcObject = stream
    })

    stream.getTracks().forEach(function (track) {
        track.stop()
    })

    if (result == null) {
        throw new Error('Cannot take canvas screenshot')
    }

    return result
}

// from: https://stackoverflow.com/a/46182044/5221762
function getJpegBlob(canvas) {
    return new Promise((resolve, reject) => {
        // docs: https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/toBlob
        canvas.toBlob(blob => resolve(blob), 'image/jpeg', 0.95)
    })
}

async function getJpegBytes(canvas) {
    const blob = await getJpegBlob(canvas)
    return new Promise((resolve, reject) => {
        const fileReader = new FileReader()

        fileReader.addEventListener('loadend', function () {
            if (this.error) {
                reject(this.error)
                return
            }
            resolve(this.result)
        })

        fileReader.readAsArrayBuffer(blob)
    })
}

async function takeScreenshotJpegBlob() {
    const canvas = await takeScreenshotCanvas()
    return getJpegBlob(canvas)
}

async function takeScreenshotJpegBytes() {
    const canvas = await takeScreenshotCanvas()
    return getJpegBytes(canvas)
}

function blobToCanvas(blob, maxWidth, maxHeight) {
    return new Promise((resolve, reject) => {
        const img = new Image()
        img.onload = function () {
            const canvas = document.createElement('canvas')
            const scale = Math.min(
                1,
                maxWidth ? maxWidth / img.width : 1,
                maxHeight ? maxHeight / img.height : 1,
            )
            canvas.width = img.width * scale
            canvas.height = img.height * scale
            const ctx = canvas.getContext('2d')
            ctx.drawImage(img, 0, 0, img.width, img.height, 0, 0, canvas.width, canvas.height)
            resolve(canvas)
        }
        img.onerror = () => {
            reject(new Error('Error load blob to Image'))
        }
        img.src = URL.createObjectURL(blob)
    })
}

DEMO:

document.body.onclick = async () => {
    // take the screenshot
    var screenshotJpegBlob = await takeScreenshotJpegBlob()

    // show preview with max size 300 x 300 px
    var previewCanvas = await blobToCanvas(screenshotJpegBlob, 300, 300)
    previewCanvas.style.position = 'fixed'
    document.body.appendChild(previewCanvas)

    // send it to the server
    var formdata = new FormData()
    formdata.append("screenshot", screenshotJpegBlob)
    // Note: don't set Content-Type manually; the browser adds the correct
    // multipart/form-data header (including the boundary) for a FormData body
    await fetch('https://your-web-site.com/', {
        method: 'POST',
        body: formdata,
    })
}

// and click on the page

Answered by Nikolay Makhonin

Solution #5

Here is a complete screenshot example that works with Chrome in 2021. The end result is a Blob, ready to be transmitted. The flow is: request media > grab frame > draw to canvas > convert to blob. If you want to save memory, look into OffscreenCanvas or ImageBitmapRenderingContext (a sketch using the latter follows the snippet below).

https://jsfiddle.net/v24hyd3q/1/

// The jsfiddle page provides a <canvas> element; create one here so the snippet is self-contained
const canvas = document.createElement('canvas');

// Request media
navigator.mediaDevices.getDisplayMedia().then(stream => 
{
  // Grab frame from stream
  let track = stream.getVideoTracks()[0];
  let capture = new ImageCapture(track);
  capture.grabFrame().then(bitmap => 
  {
    // Stop sharing
    track.stop();

    // Draw the bitmap to canvas
    canvas.width = bitmap.width;
    canvas.height = bitmap.height;
    canvas.getContext('2d').drawImage(bitmap, 0, 0);

    // Grab blob from canvas
    canvas.toBlob(blob => {
        // Do things with blob here
        console.log('output blob:', blob);
    });
  });
})
.catch(e => console.log(e));
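
A sketch of the memory-saving variant mentioned above, handing the grabbed ImageBitmap to a 'bitmaprenderer' context instead of redrawing it through a '2d' context (the helper name bitmapToBlob is only illustrative):

// Convert a grabbed ImageBitmap to a JPEG Blob without an extra copy:
// transferFromImageBitmap() hands ownership of the bitmap to the canvas.
function bitmapToBlob(bitmap) {
  const canvas = document.createElement('canvas');
  canvas.width = bitmap.width;
  canvas.height = bitmap.height;
  canvas.getContext('bitmaprenderer').transferFromImageBitmap(bitmap);
  return new Promise(resolve => canvas.toBlob(resolve, 'image/jpeg', 0.95));
}

It can be called from the grabFrame() callback above in place of the drawImage() and toBlob() steps.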

Answered by BobbyTables

Post is based on https://stackoverflow.com/questions/4912092/using-html5-canvas-javascript-to-take-in-browser-screenshots