Today we are talking about maps: interactive maps on the web. There are plenty of options out there for creating beautiful maps with exciting features and engaging content. In this article we will focus on maps based on vector data, created using OpenLayers.

More specifically, we are going to look into how OpenLayers is currently evolving to use the WebGL API in order to offer much better rendering performance, and into what exactly is going on inside the rendering code. Let’s dive in!

Using WebGL for rendering maps in OpenLayers | © Peakpx

Some context

OpenLayers is a popular library for creating interactive maps. It has been around for quite some time now, and has been through numerous refactorings and rewrites. Apart from creating maps, OpenLayers also provides many utilities for transforming data from and to numerous formats such as GeoJSON, GPX and Mapbox Vector Tiles, as well as for interacting with standard protocols such as WMS and WFS.

The recent rise in vector data usage follows the appearance of high-performance renderers such as Mapbox GL JS. This has been a real challenge for OpenLayers: traditionally, the assumption was that large vector datasets would be baked into images and served either as tiles (for example through WMTS) or as custom-size images (through WMS), because rendering the actual shapes, instead of images containing those shapes, was expected to be rare and limited to small volumes.

Using WebGL for rendering maps in OpenLayers | © Camptocamp

Even though the focus has not been on vector data in the past years, the vector renderer available in OpenLayers is highly optimized: it can cope with thousands of complex shapes, simplifies lines and polygons in real time, and uses a spatial index to quickly filter out geometries outside of the current view. Still, when the renderer is put to the test on really dense datasets (typically, vector tiles), it struggles to keep up and performance becomes a limiting factor in many situations.

Workarounds were found, optimizations were made, and people still managed to build interactive maps with a lot of vector data while relying on OpenLayers like they used to. But at the end of the day, the root issue remains: currently, rendering shapes is just not fast enough.

A journey to WebGL

Where we stand

As it is now and has been for some time, there are two main alternatives for rendering graphics on a web page: the Canvas API and the WebGL API.

OpenLayers has traditionally been able to rely on different methods for rendering: in the past it could render things using SVG or even the DOM, but nowadays its main rendering capabilities rely on the Canvas API. This API provides excellent visual quality, consistent results and powerful text rendering; its main drawback is probably that it is simply too slow for complex operations.

The WebGL API, on the other hand, has the potential to be extremely fast, but it is costly to implement, highly error-prone and very, very bad at drawing text. In fact, it will absolutely not help you with text at all, and will instead offer you the pleasure of becoming an expert in kerning, ligatures and other joys of life.

Implementing WebGL-based renderers in OpenLayers has been ongoing work for several years now, and has shown some impressive progress recently. Still, the mainstream Canvas-based renderers remain the default option, providing all the functionality and fulfilling the majority of use cases.


Next milestone: not (so) far ahead!

Currently, OpenLayers only offers a WebGL-based renderer for points. Any other kind of vector data (lines, polygons) has to go through the Canvas-based renderer. This can still help in scenarios like data visualization, but not when it comes to fully featured vector tiles (for example OpenStreetMap data).

The obvious next step is to render lines and polygons as well! Both types of geometry are significantly harder to handle than points: WebGL does not offer line or polygon primitives, so both have to be broken down into triangles for rendering.

The good news is that there are many sources of inspiration online, because this is essentially a solved problem (kind of) in WebGL. On the other hand, rendering geometries on a map poses different challenges than, say, a 3D mesh. This means we have to do all the work ourselves and not rely on existing tools! We must have fun along the way, otherwise what's the point?

Getting to work

This is the part where this article gets technical. We are going to look into the current WebGL rendering logic and how it will evolve.

Rendering points is easy

Like other types of geometries, points have to be broken down into triangles to be rendered.

The usual approach is to build a so-called quad:

Using WebGL for rendering maps in OpenLayers | © Camptocamp

See? Four vertices, two triangles, one quad.

This very simple setup gives us a working surface to represent the point using anything from simple colors to actual images.

This approach has several advantages: each point requires a fixed number of triangles, and the logic for generating the vertex positions is trivial (adding an offset to the point center).
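The quad expansion can be sketched in a few lines of plain JavaScript. This is a hypothetical helper for illustration, not the actual OpenLayers code: each point becomes four vertices (its center offset toward each corner) and two triangles referencing those vertices by index.

```javascript
// Expand a single point into a quad: four corner vertices and two
// triangles covering them. `halfSize` is half the rendered symbol size.
function pointToQuad(x, y, halfSize) {
  const vertices = [
    x - halfSize, y - halfSize, // bottom left  (index 0)
    x + halfSize, y - halfSize, // bottom right (index 1)
    x + halfSize, y + halfSize, // top right    (index 2)
    x - halfSize, y + halfSize, // top left     (index 3)
  ];
  // Two triangles sharing the diagonal 0-2.
  const indices = [0, 1, 2, 0, 2, 3];
  return {vertices, indices};
}

const quad = pointToQuad(10, 20, 4);
console.log(quad.vertices.length / 2); // 4 vertices
console.log(quad.indices.length / 3); // 2 triangles
```

In a real renderer the indices would go into an element buffer and the vertices into a vertex buffer, but the per-point logic stays this simple.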


From data to triangles

Points have to be read from a vector data source; this kind of source can take its data from anywhere: remote files, WFS services, dynamically generated objects, etc.

As such, it provides the perfect abstraction for rendering objects from many different origins.


The class responsible for rendering points with WebGL is called WebGLPointsLayerRenderer and it works like this:

Using WebGL for rendering maps in OpenLayers | © Camptocamp

Reality is of course a bit more complex, but this is an accurate representation of the general workflow. It is also worth noting that the triangle-generation step is performed in another thread, to avoid impacting the smoothness of the user experience too much.
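The batching part of that workflow can be sketched as follows (a simplified model with a hypothetical helper name and plain feature objects, not the real WebGLPointsLayerRenderer internals): every point feature is expanded into a quad, and everything is concatenated into flat typed arrays. Since typed arrays are transferable, a worker can hand them back to the main thread without copying.

```javascript
// Turn a list of point features into the flat arrays that will be
// uploaded to the GPU: 4 vertices and 2 triangles per point.
function generateBuffers(features, halfSize) {
  const vertices = [];
  const indices = [];
  features.forEach((feature, i) => {
    const [x, y] = feature.coordinates;
    // Four corners offset from the point center...
    vertices.push(
      x - halfSize, y - halfSize,
      x + halfSize, y - halfSize,
      x + halfSize, y + halfSize,
      x - halfSize, y + halfSize,
    );
    // ...and two triangles referencing them; each point owns 4 vertices.
    const o = i * 4;
    indices.push(o, o + 1, o + 2, o, o + 2, o + 3);
  });
  return {
    vertices: Float32Array.from(vertices),
    indices: Uint32Array.from(indices),
  };
}
```

Running this for the whole source at once is what makes a single draw call per layer possible, instead of one draw call per feature.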

This works for points and, to be fair, there is no reason it would not work for lines or polygons as well! Something like this, for example:

Using WebGL for rendering maps in OpenLayers | © Camptocamp

There is an added step here, but it is not too complex: discriminating geometries by type and storing them in separate lists to be processed later on. This is essential because each type of geometry has its own specificities.
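That dispatch step is simple enough to sketch. The feature shape below is a hypothetical GeoJSON-like object used for illustration, not the OpenLayers feature class:

```javascript
// Split features into per-geometry-type batches so that each batch can
// later go through its own triangle-generation logic.
function splitByGeometryType(features) {
  const batches = {Point: [], LineString: [], Polygon: []};
  for (const feature of features) {
    const batch = batches[feature.geometry.type];
    if (batch) {
      batch.push(feature);
    }
  }
  return batches;
}

const batches = splitByGeometryType([
  {geometry: {type: 'Point'}},
  {geometry: {type: 'Polygon'}},
  {geometry: {type: 'Point'}},
]);
console.log(batches.Point.length); // 2
```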


Ok so this is easy, right?

Except, there are also two new processes that we have not talked about:

- breaking polygons down into triangles (triangulation);
- building triangle meshes for lines.

These two, on the other hand, are much more involved. Let’s take a closer look.

Using WebGL for rendering maps in OpenLayers | © Camptocamp

At the end of the day, aren’t polygons like squares... with more sides?

Not quite, no. Polygon geometries can count hundreds or thousands of vertices and possess intricate concave shapes. They might even have holes!

Breaking a polygon down into triangles (also known as triangulation) is a complex topic that has been researched for decades. Several algorithms have been devised, with varying strengths and trade-offs. For our case, we need an algorithm that prioritizes speed over precision and can handle complex polygons with holes, and fortunately there is an open-source library that does exactly that!

This library, earcut, is actually part of the Mapbox GL JS renderer mentioned earlier, and one of the reasons why it is so performant. Reimplementing a complex algorithm like this one would probably not bring any benefit, and the library has a permissive license, so there is not much to worry about.

So, let’s look at what triangulating a polygon looks like:

Using WebGL for rendering maps in OpenLayers | © Camptocamp

What we call a polygon is usually defined by a single outer ring and zero to many inner rings. The outer ring is the boundary of the polygon and inner rings define holes in it.

As we have seen before, a ring (which is essentially a closed line) cannot be used as-is to draw a delimited area with WebGL: we have to triangulate it. The diagram above shows how a very simple polygon is drawn, but in reality things can get much more complex (this image is an example taken from the mapbox/earcut library).

All in all, this was not too hard, you might say! I agree: this was actually the easy part. For lines, we are on our own, which is why they will be covered not in this article but in part 2!

Coming up next

In this article we have taken a look at what happens inside the rendering engine of a library like OpenLayers, at least for points and polygons. In part 2 we will try to make ourselves familiar with lines and their many pitfalls, and have a look at the actual implementation with a live demo. Happy coding!

For more information, do not hesitate to get in contact with us!
