OpenCV is a powerful library for computer vision. While it is traditionally used from languages like C++ or Python, OpenCV.js is the JavaScript version of the library and can be used in web applications.
Steps:
- Create a Docker Container
- Set up Angular Application
- Download JS libraries and some images
- Create Component and Integrate OpenCV.js
2. Set up Angular Application
We will now set up the Angular application using the newly built Docker image. Start by running the following command:
docker compose run --rm -it opencvjs bash
This will start a new container and launch a bash shell inside it. Here, we will use the following command to create a new Angular project:
ng new opencvjs --routing --standalone false --style scss --ssr false --directory .
Then exit the container with the exit command.
3. Download JS libraries and some images
Open the opencvjs folder in your terminal and run the following commands:
mkdir -p src/assets public/models
wget https://docs.opencv.org/4.10.0/opencv.js -O src/assets/opencv.js
wget https://docs.opencv.org/4.10.0/utils.js -O src/assets/utils.js
wget https://github.com/opencv/opencv/raw/refs/tags/4.10.0/data/haarcascades/haarcascade_frontalface_default.xml -O public/models/haarcascade_frontalface_default.xml
wget https://github.com/opencv/opencv/blob/4.x/samples/data/lena.jpg?raw=true -O public/lena.jpg
wget https://balling.dk/images/TheBigBangTheory.jpg -O public/TheBigBangTheory.jpg
The simplest way to integrate the OpenCV.js library into your Angular application is by adding opencv.js to the scripts array in your angular.json file. We will also need to add the utils.js file to the scripts array.
{
  "projects": {
    "opencvjs": {
      "architect": {
        "build": {
          "options": {
            "scripts": [
              "src/assets/opencv.js",
              "src/assets/utils.js"
            ]
          }
        }
      }
    }
  }
}
While we are at it, we need to add the models folder to the assets section of the angular.json file.
{
  "projects": {
    "opencvjs": {
      "architect": {
        "build": {
          "options": {
            "assets": [
              {
                "glob": "**/*",
                "input": "public"
              },
              {
                "glob": "**/*",
                "input": "public/models",
                "output": "models"
              }
            ]
          }
        }
      }
    }
  }
}
Now that we have configured Angular to use OpenCV.js, we can run the application.
docker compose up
This will start the Docker container and run the Angular application. It will then be available at http://localhost:4200. As we modify the code, it will be automatically rebuilt and reloaded in the browser.
Note: OpenCV.js is a large library, so it might take some time to load on the first use.
4. Create Component and Integrate OpenCV.js
Now that OpenCV.js is added to the project, we can start integrating OpenCV functionality into an Angular component.
Generate a component for image processing
To access the Docker container, we can use the following command:
docker compose exec -it opencvjs bash
ng generate component opencv
This will generate a new component called OpencvComponent. The component will be located in src/app/opencv/opencv.component.ts, where we will now write the logic to process images.
// opencv.component.ts
import { Component, OnInit } from '@angular/core';

declare var cv: any;
declare var Utils: any;

@Component({
  selector: 'app-opencv',
  templateUrl: './opencv.component.html',
  styleUrls: ['./opencv.component.scss']
})
export class OpencvComponent implements OnInit {

  constructor() { }

  ngOnInit(): void {
    this.loadOpenCv();
  }

  // Function that loads the OpenCV.js library and initializes it.
  async loadOpenCv() {
    if (typeof cv === 'undefined') {
      console.error('OpenCV.js is not loaded');
      return;
    }
    // Set the callback function that will be invoked after the library is loaded.
    cv.onRuntimeInitialized = () => {
      console.log('OpenCV.js is ready to use!');
      // Create an image element and load an image.
      let imageElement = document.createElement('img');
      imageElement.src = '/TheBigBangTheory.jpg';
      imageElement.onload = () => {
        this.processImage(imageElement);
      };
    };
  }

  // Function that draws the detected faces on an image.
  async processImage(imageElement: HTMLImageElement) {
    // Create a Mat object from the image element.
    const mat = cv.imread(imageElement);
    const grayMat = new cv.Mat();

    // Convert the image to grayscale.
    cv.cvtColor(mat, grayMat, cv.COLOR_RGBA2GRAY);

    // Load the face detection classifier (Haar Cascade for frontal face).
    const faceCascade = new cv.CascadeClassifier();
    const cascadeFile = "haarcascade_frontalface_default.xml";

    // Use createFileFromUrl to load the xml file into OpenCV's virtual filesystem.
    let utils = new Utils('errorMessage');
    utils.createFileFromUrl(cascadeFile, "/models/" + cascadeFile, () => {
      faceCascade.load(cascadeFile);
    });

    // Wait for the face detection classifier to be loaded.
    while (faceCascade.empty()) {
      await new Promise((resolve) => setTimeout(resolve, 100));
    }

    // Detect faces in the image.
    const faces = new cv.RectVector();
    const size = new cv.Size(0.5, 0.5); // Minimum size of the face to detect.
    faceCascade.detectMultiScale(grayMat, faces, 1.2, 10, 0, size, size);

    // Write the number of detected faces to the console.
    console.log('Detected ' + faces.size() + ' faces');

    // Draw rectangles around detected faces.
    for (let i = 0; i < faces.size(); i++) {
      const face = faces.get(i);
      let point1 = new cv.Point(face.x, face.y);
      let point2 = new cv.Point(face.x + face.width, face.y + face.height);
      cv.rectangle(mat, point1, point2, [255, 0, 0, 255]);
    }

    // Display the processed image in a canvas element.
    const canvas = document.getElementById('canvasOutput') as HTMLCanvasElement;
    cv.imshow(canvas, mat);

    // Clean up memory.
    mat.delete();
    faces.delete();
    grayMat.delete();
    faceCascade.delete();
  }
}
Here is a brief summary of what the above code does:
- Reading the Image: The cv.imread function reads the input image from the HTML image element.
- Converting to Grayscale: The cv.cvtColor function converts the image to grayscale for face detection.
- Loading the Face Detection Classifier: The cv.CascadeClassifier class is used to load a pre-trained Haar Cascade for frontal face detection. The classifier XML is fetched from the models folder and written into OpenCV's virtual filesystem using the createFileFromUrl helper from utils.js (a Promise-based alternative is sketched after this list).
- Detecting Faces: The cv.CascadeClassifier.detectMultiScale function detects faces in the grayscale image; the minimum face size is passed as a cv.Size object. The detected faces are stored in the cv.RectVector that is passed in.
- Drawing Rectangles: The code then iterates over the detected faces and draws rectangles around them on the original image using the cv.rectangle function.
- Displaying the Image: The processed image is displayed in a canvas element using the cv.imshow function.
- Cleaning Up Memory: Finally, the code deletes all OpenCV objects to free up memory. This is especially important because we are using a C/C++ wrapper that uses dynamic memory allocation.
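As a design note, the component waits for the classifier by polling faceCascade.empty() in a while loop. An alternative is to wrap the createFileFromUrl callback in a Promise and await it. The following is only a minimal sketch, not part of the tutorial code; it assumes the same global cv and Utils objects and the same file locations used above.

// Sketch: Promise-based alternative to polling faceCascade.empty().
// Assumes the global cv and Utils objects declared in the component.
loadCascade(url: string, virtualName: string): Promise<any> {
  return new Promise((resolve) => {
    const utils = new Utils('errorMessage');
    // Copy the XML file into OpenCV's virtual filesystem, then load it.
    utils.createFileFromUrl(virtualName, url, () => {
      const cascade = new cv.CascadeClassifier();
      cascade.load(virtualName);
      resolve(cascade);
    });
  });
}

// Usage inside processImage:
// const faceCascade = await this.loadCascade(
//   '/models/haarcascade_frontalface_default.xml',
//   'haarcascade_frontalface_default.xml');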
HTML Template
Here’s an HTML template for the component:
opencv.component.html
<div class="opencv-container">
  <h1>OpenCV.js with Angular</h1>
  <!-- Used by the Utils helper from utils.js to display error messages -->
  <p id="errorMessage"></p>
  <canvas id="canvasOutput"></canvas>
</div>
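The generated component is declared in the app module, but it still has to be rendered somewhere. Assuming the default root component created by ng new, the simplest option is to place the selector <app-opencv></app-opencv> in src/app/app.component.html (replacing the placeholder content); alternatively it can be exposed through a route, since the project was created with routing enabled.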
Notes
- OpenCV.js may take a while to load, especially on the first use. You can handle this with a loading indicator (a minimal sketch follows below).
- OpenCV.js includes a wide variety of image processing functions, so you can expand this project with more complex features like object tracking, edge detection, or filtering.
- Ensure you properly clean up OpenCV matrices (mat.delete()) to prevent memory leaks.
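For the loading indicator mentioned above, one simple approach is to keep a flag on the component and flip it once cv.onRuntimeInitialized fires. This is only a minimal sketch, not part of the tutorial code; it reuses the component and template names from earlier and omits the image processing logic.

// Sketch: a loading flag for OpencvComponent (processing logic omitted).
import { Component, OnInit } from '@angular/core';

declare var cv: any;

@Component({
  selector: 'app-opencv',
  templateUrl: './opencv.component.html',
  styleUrls: ['./opencv.component.scss']
})
export class OpencvComponent implements OnInit {
  // True until the OpenCV.js runtime has finished initializing.
  loading = true;

  ngOnInit(): void {
    if (typeof cv === 'undefined') {
      console.error('OpenCV.js is not loaded');
      return;
    }
    cv.onRuntimeInitialized = () => {
      // Hide the indicator once the WebAssembly runtime is ready.
      this.loading = false;
      // ... continue with image loading and processing as before.
    };
  }
}

// In opencv.component.html the flag can drive a spinner or a message:
// <p *ngIf="loading">Loading OpenCV.js...</p>
// <canvas id="canvasOutput" [hidden]="loading"></canvas>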
This setup allows you to build a powerful web-based image processing application using OpenCV and Angular!
For additional examples, have a look at https://github.com/DarkMaguz/angular-opencv-template