(One of my summaries of the 2023 Dutch foss4g.nl conference in Middelburg).
Bart has been the full-time maintainer of MapLibre since early 2023.
MapLibre is a map-rendering toolkit. Actually, it is two rendering toolkits: one for JavaScript/web and one for native (Android/iOS). Native is the one he maintains.
It renders vector data, which comes in as vector tiles. Normally a map server serves the tiles, but you can also store them locally for offline usage. A server for vector tiles doesn’t need to be a big machine: the layers he demoed were hosted on a Raspberry Pi in his basement.
Vector tiles need styles, which are defined in JSON. This gives you lots of flexibility: night mode, different renderings for biking or walking, etc.
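As an illustration (my own sketch, not from the talk): a minimal style document has a version, sources and layers. The tile URL and layer names below are made up.

    import json

    # Minimal MapLibre style document (version 8 of the style spec).
    # The tile URL and layer names are made-up placeholders.
    style = {
        "version": 8,
        "sources": {
            "demo": {
                "type": "vector",
                "tiles": ["https://example.com/tiles/{z}/{x}/{y}.pbf"],
            },
        },
        "layers": [
            {
                "id": "roads",
                "type": "line",
                "source": "demo",
                "source-layer": "roads",
                "paint": {"line-color": "#888888"},
            },
        ],
    }
    print(json.dumps(style, indent=2))

A night mode is then mostly a second style document with different paint colors.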
Rendering is done on the client, which needs a GPU. On the web, it uses WebGL. Natively, it is OpenGL. Only… Apple wants you to use their own “Metal” API. They are currently implementing support for that (he showed a quick demo).
They are funded by sponsors. Several bigger companies (like Meta) sponsor it, as maintaining it collectively with other companies is way cheaper than doing it alone.
(Part of a series of talks about automatic measurement tools in the province of Zeeland. This one is about collecting all the data from the sensors.)
Originally the data would all be sent via LoRa, but even there, there are multiple standards. And some is sent via GPRS. Or HTTPS. Or legacy FTP. Some older systems need polling instead of pushing the data themselves.
So… what about a generic software solution for receiving, transforming and publishing sensor data? This is what they wanted:
No vendor lock-in.
Open source components.
Scalable and highly available. Near real-time.
A generic data model (“quite a challenge”…).
Publication based on open standards. They also describe their APIs with the standard OpenAPI format for easy interoperability (see the sketch after this list).
No archival function. Data stays in the system for two months or so; afterwards it is the responsibility of the client to take care of long-term storage.
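To illustrate the OpenAPI point (my own sketch; the endpoint and fields are invented, not theirs), such a machine-readable API description looks roughly like this:

    # Hypothetical OpenAPI 3 description of a sensor-observations endpoint,
    # expressed as a Python dict. Paths and fields are invented.
    openapi_doc = {
        "openapi": "3.0.3",
        "info": {"title": "Sensor data API", "version": "1.0"},
        "paths": {
            "/observations": {
                "get": {
                    "summary": "Observations from roughly the last two months",
                    "responses": {
                        "200": {"description": "A list of observations"},
                    },
                },
            },
        },
    }

Clients can generate code or documentation from such a description, which is what makes it handy for interoperability.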
The scalable part of the system is handled with Docker containers (easy implementation, you can package a complete stack, integration with devops tooling). Originally they developed in NodeJS. Because that is single-threaded, they tried out Go. But Go isn’t as well-known as NodeJS or Python.
Hosting via Kubernetes. Easy scaling. Pay-as-you-use. They use the managed Azure Kubernetes service, but without using any Azure-specific functionality, so that they can move if needed.
Internally the workers are organised in “pipelines”: individual steps connected into one whole.
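Conceptually (my own sketch, not their code), such a pipeline is just a chain of small steps, each taking a message and passing its result on:

    import json

    # Toy pipeline: each step is a function from message to message.
    # The step names mirror the receive/transform/publish flow mentioned
    # above; the real workers are of course more elaborate.

    def parse(raw):
        # "Receive": decode an incoming payload.
        return json.loads(raw)

    def to_generic_model(msg):
        # "Transform": map a vendor-specific payload onto the generic model.
        return {"sensor_id": msg["id"], "value": msg["val"]}

    def publish(msg):
        # "Publish": hand the result to the next system.
        print("publishing", msg)
        return msg

    def run_pipeline(raw, steps=(parse, to_generic_model, publish)):
        for step in steps:
            raw = step(raw)
        return raw

    run_pipeline(b'{"id": "s1", "val": 21.5}')
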
For testing out simple scripts they made a “generic python worker” that you can start with a short Python script as input. Handy for testing without needing to do a complete new deployment.
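A minimal version of such a generic worker could look like this (my own sketch: it assumes the user script defines a handle(message) function; their actual convention may differ):

    import json
    import sys

    # Hypothetical "generic python worker": it loads a short user-supplied
    # script that must define handle(message), then applies that to every
    # JSON message read from stdin (one per line).

    def main(script_path):
        namespace = {}
        with open(script_path) as f:
            exec(f.read(), namespace)
        handle = namespace["handle"]
        for line in sys.stdin:
            print(json.dumps(handle(json.loads(line))))

    if __name__ == "__main__":
        main(sys.argv[1])
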
Citizen participation: he helped with earthquake information for the gas-extraction-related earthquakes in the north of the Netherlands. Originally, the data wasn’t really available from the official monitoring institute: a PDF with a historical overview and an API with the 30 latest quakes. A first quick website with an overview became quite popular, as it was the only real source of information. It was also used by the province!
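Since the API only served the 30 latest quakes, building a historical overview means polling regularly and accumulating the results. A sketch of that idea (the URL and field names are invented, not the real API):

    import json
    import urllib.request

    # Invented endpoint: the real API differs.
    API_URL = "https://example.com/api/latest-quakes"
    ARCHIVE = "quakes.json"

    def poll_and_accumulate():
        # Run this from cron; it merges the latest quakes into a local archive.
        with urllib.request.urlopen(API_URL) as response:
            latest = json.loads(response.read())
        try:
            with open(ARCHIVE) as f:
                archive = {q["id"]: q for q in json.load(f)}
        except FileNotFoundError:
            archive = {}
        for quake in latest:
            archive[quake["id"]] = quake
        with open(ARCHIVE, "w") as f:
            json.dump(list(archive.values()), f)

    if __name__ == "__main__":
        poll_and_accumulate()
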
It started out on a small server in his home. Then he moved to a VPS. Then the website was mentioned in a big national newspaper and the server was brought down by the traffic.
Later the website was improved. Hosting was done relatively simply with Ubuntu, PostgreSQL, cron, Highcharts, GeoServer, jsPDF, etc.: the “Gasbevingen portaal” (gas quake portal).
New functionality is address-based generation of all the relevant data for your own house. Handy for the damage claims that have to happen now. He notices that the lawyers of the oil companies now also use the data from his website :-)
What changed in the last ten years? The KNMI (the official source of info) is sharing much more information than before. Though it is aimed at researchers rather than at citizens.
Citizen participation like this can be very attractive where trust in the government is low. Don’t make it too complex: we’re nerds and it is easy to go overboard.
There is an ever-increasing demand for mobile data and 5G. At the same time, there is ever-increasing resistance against actual new cell towers… As a provider, you can adjust your existing equipment: using 5G, using more frequencies, etc. But eventually you run into hard limits and need new towers.
Wazir made several analyses to determine the expected extra demand and compare it with the available supply. For demand, they looked at population density, traffic data, railway station usage, etc.
For supply they started with https://antenneregister.nl : 187k antennas! But individual antennas should be grouped into “sites”: all antennas on one building’s roof form one site, so PostGIS’s ST_Within was used to match antennas to buildings. And antennas close to one another are probably all on the same physical cell tower.
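A rough sketch of how that grouping could work (my own reconstruction; table and column names are invented): ST_Within matches antennas to building footprints, and ST_ClusterDBSCAN groups the remaining nearby antennas into probable shared towers.

    import psycopg2

    # Table and column names are invented for illustration. Assumes a
    # projected coordinate system in meters (e.g. Dutch RD New).
    conn = psycopg2.connect("dbname=antennas")
    with conn, conn.cursor() as cur:
        # Antennas on the same building roof form one site.
        cur.execute("""
            SELECT b.id AS building_id, array_agg(a.id) AS antenna_ids
            FROM antennas a
            JOIN buildings b ON ST_Within(a.geom, b.geom)
            GROUP BY b.id
        """)
        sites_on_buildings = cur.fetchall()

        # Remaining antennas within ~25 m of each other probably share one
        # physical tower: cluster them with DBSCAN.
        cur.execute("""
            SELECT a.id,
                   ST_ClusterDBSCAN(a.geom, eps := 25, minpoints := 1)
                       OVER () AS cluster_id
            FROM antennas a
            WHERE NOT EXISTS (
                SELECT 1 FROM buildings b WHERE ST_Within(a.geom, b.geom)
            )
        """)
        clustered = cur.fetchall()
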
The result was an estimated 300-700 extra sites. But… only 16-36 of those stem from actual capacity problems. The rest are for (mandatory) coverage improvements and for planned speed improvements.
I gave this talk myself. There will be separate detailed blog posts later on :-)
My name is Reinout van Rees. I program in Python, live in the Netherlands, ride recumbent bicycles and have a model railway.
Most of my website content is in my weblog. You can keep up to date by subscribing to the automatic feeds (for instance with Google Reader):