Reinout van Rees’ weblog

Amsterdam python meetup


Tags: python, django

(Summaries of talks at the April 2022 meetup).

Interesting takeaways from book ‘Test Driven Development With Python’ - Rok Klancar

The book “test driven development with python” is available for free online at . The subtitle is test driven development (TDD) for the Web, with Python, Selenium, Django, JavaScript and pals…

It is a quite thick book, but the author is very enthusiastic about programming and the subject of testing, so it is actually very readable.

The book consists of three parts:

  • The basics of test driven development and django. He starts out with explaining web development with django, just enough to have something real to test.

  • Web development sine qua non. He’s serious, at the end of part two you’ll actually have deployed a web application. (And you’ll have tested it, of course.)

  • More advanced topics in testing.

The core idea of test driven development is that you first write a (failing!) test and only then you write the code to get the tests to pass. The first test in the book is whether you have installed django (which you haven’t yet). That sets the tone for the rest of the book :-)

The whole time you’re writing the minimal code needed to get the tests to pass.
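In code, that red/green rhythm looks roughly like this (slugify is my own toy example, not one from the book):

```python
import unittest

# Red: write the (failing) test first, against a function
# that doesn't exist yet.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Green: then write the minimal code that makes the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")
```

Only when the test passes do you refactor, and then you repeat the cycle.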

Different kinds of tests are explained. Functional tests (browser tests in this case, written from the point of view of a user) and unit tests (much finer-grained, written from the point of view of the programmer).

The book is friendly to the reader, also to beginners. For instance, it even explains how to read a traceback.

An important concept: refactoring. You refactor either the code or the tests, never both at the same time! And only change code when all the tests are passing. Otherwise you’re like the “refactoring cat”: a cat jumping into a bathtub and totally panicking. That’s what happens when the code is a mess without passing tests; it is hard to work in an environment like that. But if you have passing tests, it suddenly feels safe to change your code.

The second part even teaches you how to deploy your code to the server in an explicit way (with “fabric”: the appendices have an example with “ansible”). Also in the second part: validating your input.

Part three (“advanced topics”) deals with authentication, for instance. Exploratory code is explained: quick code without tests to explore something. And then of course, adding tests for it and getting it in shape. Also in this part: “mocking”, “continuous integration”, etc.

Everybody can learn something from this book. In a sense, it is a bit of a strange book because it explains a lot of different things. There’s also a bit of a patronizing tone in the book, which took some getting used to. But the author says in the preface that he wrote it for a younger version of himself, so that’s ok.

Graph-based stream processing in Python – Katarina Šupe

Katarina works at Memgraph, a little startup.

A graph is a network structure: a set of nodes (“vertices”) and a set of relationships between them (“edges”). Nodes and relations can have properties (key/value pairs).

Relational databases are what everybody uses. Why would you use a graph database? When is it useful? What are the differences?

  • Graph databases are generally “schemaless”, so you have lots of freedom with your properties.

  • Graph databases have built-in relations. A relational database isn’t necessarily very good/easy/efficient for many-to-many relations.

The most used language to query graph databases is the Cypher query language. This is also what their Memgraph database uses.
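For a feel of what Cypher looks like, an illustrative toy query (not one from the talk), embedded in a python string:

```python
# A Cypher query reads like ASCII art of the graph pattern:
# nodes in parentheses, relationships in square brackets.
query = """
MATCH (person:Person)-[:FOLLOWS]->(friend:Person)
WHERE person.name = 'Ann'
RETURN friend.name
"""
print(query)
```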

Graph analytics (network analysis) surfaces insights hidden in the relationships of the network structure. Pagerank, shortest path, etc. are examples. Recommendation engines also use it. Supply chain risks and fraud detection are business examples.
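Shortest path, for instance, is easy to sketch in pure python with a breadth-first search (a toy illustration of the algorithm, not how a graph database implements it):

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first shortest path over an adjacency-dict graph."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no route between start and goal

# A tiny "who follows whom" network:
friends = {"ann": ["bob"], "bob": ["carl", "dana"], "carl": [], "dana": ["eve"]}
print(shortest_path(friends, "ann", "eve"))
```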

Stream processing: you start out with (real time) input data, which goes into the “stream processing engine” and the output goes on to another system, an analytics dashboard or so.

They’re making gqlalchemy, a sort of “sqlalchemy for graph databases”: a python OGM (object graph mapper). The idea is that you can write python code instead of a cypher query.

Also interesting: MAGE, Memgraph Advanced Graph Extensions: an open source repo with graph analytics modules.

Lightning talk: managing your laptop - Reinout van Rees

I gave a quick lightning talk about managing your laptop. I got a new laptop yesterday and managed to set it up within a day. I got it working quickly by storing my config in git and also having a readme explaining my setup.

Two very handy tools:

Rotterdam (NL) 2022 python meetup summaries


Tags: python, django, nelenschuurmans

(Some summaries of talks at the February 2022 Rotterdam python meetup).

Technical note about running the live+online meetup

The meetup was partly live, partly online. The technical setup worked surprisingly well:

  • They had a microsoft Teams channel for the online folks.

  • A laptop was connected to that same channel and showed it on a big screen via a projector in the room for the “live” folks.

  • The speakers had to connect to the Teams channel to show their slides both online and automatically also in the room.

  • A big microphone gave pretty good sound, even though it was some four meters away from the speaker.

Worked fine! Strange having a meeting without having to wrestle with hdmi adapters :-)

Pandas and excel tables done properly - Thijs Damsma

He showed

When using a jupyter notebook and pandas, you can easily load csv files and do stuff with it. Make nice graphs, for instance.

But… colleagues want xls sheets…. So you can use a pandas xls exporter. But the output is a raw xls sheet. It works much better if you format the data in xls as a “table” (“format as table”). It sounds like it only formats it visually, but it actually figures out the headings and field types. You get proper sorting and so on.

So he wrote a new exporter that exports it as a nice formatted table in excel. Much more useful.
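His exporter isn’t named in my notes, but the “format as table” trick can be sketched with openpyxl (my own minimal version, not his code):

```python
from openpyxl import Workbook
from openpyxl.worksheet.table import Table, TableStyleInfo

wb = Workbook()
ws = wb.active
ws.append(["name", "value"])  # header row
ws.append(["a", 1])
ws.append(["b", 2])

# Mark the range as a real Excel "table": Excel then knows the
# headings and gives you filtering/sorting, not just visual styling.
table = Table(displayName="Data", ref="A1:B3")
table.tableStyleInfo = TableStyleInfo(name="TableStyleMedium9",
                                      showRowStripes=True)
ws.add_table(table)
wb.save("formatted_table.xlsx")
```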

Static code analysis and codemods - Matthijs Beekman

Sometimes people change the structure of python packages you depend on. They have this problem internally as a company, too: you want to evolve your internal libraries to improve them, but you also want to keep using them all the time.

There is something called “codemods” for automated code refactoring. Fewer manual changes to your code in response to changed library code. There are two basic ways of doing this:

  • Dynamic checking: basically manually. Running tests, for instance, and looking at the results.

  • Static analysis: parse code and analyse the structure. You don’t run the code, but analyse it as well as possible. Python type hints help a lot here. You can get a warning “use a DateTime instead of a three-item tuple” out of the static analysis if a function got refactored to use a datetime instead of a year/month/day tuple.
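A tiny sketch of such an ast-based check (mylib and make_date are made-up names for illustration):

```python
import ast

# Imagine make_date(y, m, d) got refactored to take a datetime instead.
source = """
from mylib import make_date
d = make_date(2022, 4, 1)
"""

tree = ast.parse(source)
warnings_found = []
for node in ast.walk(tree):
    # Look for plain-name calls to the refactored function.
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
        if node.func.id == "make_date":
            warnings_found.append(
                f"line {node.lineno}: make_date() changed, pass a datetime"
            )

print(warnings_found)
```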

Some static analysis examples: mypy for static type checking, pylint (code linting), bandit (security testing), black (code formatting).

These static analysers often work with the “ast”, python’s built-in abstract syntax tree. There’s also a “cst”, the concrete syntax tree, which you can find in “lib2to3” and “libcst”. Libcst has good documentation on what it is.

At his company, they ship “codemods” together with the changed libraries. It doesn’t work for all corner cases, but it works for a surprising amount of cases, actually. They wrote a command line tool that you could tell to run a certain update.

How Python helps to keep the Netherlands dry - Rob van Putten

Rob works in civil engineering.

We have some 18,000 km of levees in the Netherlands. And we really need them. And… we need to assess them regularly! Key ingredients for calculating levee safety are soil information, the geometry of the levee and some extra parameters like the expected water level.

  • Soil info is gathered by taking soil measurements. The standard “GEF” ascii files that are the output are of course easily read with python.

  • Levee geometry you can extract from height measurements. There’s really good data in the Netherlands and there are loads of python libraries to work with the raster data.

  • The parameters like river levels can be found in xls files and postgres databases. Again, there are python libraries for it.
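Reading such header-plus-data ascii files is indeed easy in python. A minimal sketch of a GEF-style reader (the real GEF specification has more structure than this, so treat the format details as an assumption):

```python
def parse_gef(text):
    """Minimal GEF-style reader: '#KEY= value' header lines up to #EOH=,
    then whitespace-separated numeric data rows. Sketch only."""
    headers, data = {}, []
    in_header = True
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if in_header:
            if line.startswith("#EOH="):
                in_header = False  # end of header, data rows follow
            elif line.startswith("#"):
                key, _, value = line[1:].partition("=")
                headers[key.strip()] = value.strip()
        else:
            data.append([float(field) for field in line.split()])
    return headers, data

sample = """#GEFID= 1, 1, 0
#COLUMN= 2
#EOH=
0.00 1.25
0.02 1.31
"""
headers, rows = parse_gef(sample)
```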

Luckily, the standard program that is used for calculating the stability of levees has an api. Again: you can use python.

So… python can help you with a lot of things and help glue everything together.

But… look out for issues like data quality (BS in, BS out). And automatic calculations??? Engineers like to feel in control and don’t always want automation. Also a problem: management at companies isn’t always very innovation-minded.

Some extra comments:

  • Don’t forget your tests.

  • Don’t forget documentation. Sphinx is great.

  • Python is great for super fast development.

  • Focus is hard. Python is nice, but there’s rust… unreal…. golang… flutter… Focus! Focus!

Upgrading old django websites with docker - Reinout van Rees

I also gave a talk which I’ll try to summarize :-)

At my company (Nelen & Schuurmans) we made a website for the Dutch government (Rijkswaterstaat), some 10 years ago. I helped build it. A website they used to visit all the municipalities along the major rivers in the Netherlands. Why? Well, the water levels keep increasing.

  • The website showed, for various scenarios, the height of the water in excess of the current levee/dike height. So a graph of excess height plotted against the length of the river.

  • Either the levee needs to be strengthened and increased in height (which isn’t always desirable, especially near towns)…

  • Or the river needs more room. A bigger floodplain by removing obstacles like disused brick factories near the river. Or moving a levee a bit back. Or re-using an old river arm as extra flood channel.

  • All those measures are pretty major civil engineering works, so you need buy-in from the municipalities and the people living there.

  • So the website showed the effect of the various measures. You could select them in the website and watch the graph with the excess height lower itself a little or a lot. That way, you could make clear which measures help a lot and which not.

Lots of measures were taken along the river Meuse (Maas) during the years. And… they were effective. In July 2021 lots of rainfall pushed the water to high levels, but… there were no major problems near the Meuse! I was happy to have contributed a bit.

But… on to the topic of the talk. The website was created some ten years ago with the intention of running it for three or four years. “Can we extend it for a year?”, “can we extend it for another year?”, “can we extend it one last time?”, “can we extend it for one really really last time?”. And last year again :-) There were quite some nods from the audience at this time: it sure happens a lot.

So you have an old django website running on an old python version on an old ubuntu server… How to update it? Often the ubuntu version on the server is older than the one on your laptop. You can try to get everything running with a newer ubuntu + newer django + newer python version, but that will lead to quite some frustration.

What’s better: an incremental approach. You can use docker to good effect for this.

  • First phase: pick a docker image matching the old ubuntu (or other linux variant) version on the server.

  • Add all the “apt-get” dependencies you’ve installed for the website.

  • Get the code running there, trying to pin as much as possible of the python dependencies to what’s actually on the server.

  • Do one update: update your django revision to the latest for your old version. If you have 1.11.20, pick the latest 1.11.29.

  • Then enable deprecation warnings by running python with -Wa or by setting the PYTHONWARNINGS=always environment variable. Normally, you don’t want those warnings, but in this case they give you handy instructions on what to fix before you do the python or django update. The alternative is to “just” upgrade and have a non-starting site due to import errors and the like: then you have to figure it all out yourself. Why not be lazy and use the info that python/django wants to give you?
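The same effect as -Wa can be demonstrated from within python itself (old_api is a made-up deprecated function for illustration):

```python
import warnings

def old_api():
    # A library function on its way out typically warns like this:
    warnings.warn("old_api() is deprecated, use new_api()",
                  DeprecationWarning, stacklevel=2)
    return 42

# DeprecationWarning is hidden by default; surface it explicitly,
# just like running python with -Wa or PYTHONWARNINGS=always does.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api()

for w in caught:
    print(f"{w.category.__name__}: {w.message}")
```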

Now you’ve got a good, clean local representation of what’s on the server, ready for further upgrades.

  • Second phase: upgrade your linux. Ubuntu “xenial” to “bionic”, for instance.

  • This automatically means a newer python version. Check that the site still runs/builds.

  • Probably you need to upgrade one or more libraries to make sure it works with the newer python version.

  • Again: fix the deprecation warnings.

Such an ubuntu/python upgrade normally doesn’t result in many changes. The next phase does!

  • Third phase: upgrade django. In the talk I said you could move in one go from one django LTS (long term support) to the next LTS, provided you previously fixed all deprecation warnings. But… when looking at my latest upgrade project, I moved from 2.2 => 3.0 => 3.1 => 3.2. So… disregard what I said during the meeting and just do what’s in the django documentation :-)

  • You probably need to unpin and upgrade all your dependencies now. Dependencies normally don’t support many different django versions, so if your site is a bit older, these upgrades will be necessary.

  • Fix deprecation warnings again to get your project in a neat state.

  • Check that everything works, of course. This includes running the tests, also of course.

If you do your upgrade project in these three phases, each individual phase will be quite doable. The first phase often is the hardest if the project is already quite old.

Quick personal note: one day after the meetup, a (Dutch) come-work-at-my-company video was ready. I really like to show it here :-)

PyGrunn keynote: learn pattern matching by writing a game - Łukasz Langa


Tags: pygrunn, python

(One of my summaries of a talk at the 2021 10th Dutch PyGrunn one-day python conference).

Note: Łukasz Langa is the author of the wonderful black code formatter.

Note 2: I made a summary of a different pattern matching talk last month.

Łukasz started making a small game to learn about python 3.10’s new pattern matching functionality. Actually programming something that you want to finish helps you to really delve into new functionality. You won’t cut corners.

One of the things he automated in the past was a system to manage his notes, for instance to export notes marked “public” to his weblog. His notes are all in git. Lots of notes. Some advice unrelated to the rest of the talk:

  • Keep notes.

  • Own your data.

  • Automate with python.

He showed the source code for his simple game. One of the methods was 15 lines of an if/elif with some more nested if/else statements. if isinstance(...) and so. He then showed the same code with the new pattern matching of python 3.10. Matching on types, matching on attribute values.

match and case may seem very weird now in the way they are implemented. But he thinks they can become pretty useful. You won’t use them a lot, normally. But in some cases they’ll make your code neater and clearer. It seems useful.

PyGrunn keynote: make it work. fast. - Alexander Solovyov


Tags: pygrunn, python, django

(One of my summaries of a talk at the 2021 10th Dutch PyGrunn one-day python conference).

He is the CTO of a big Ukrainian fashion marketplace: 10-20k orders per day. So the talk is about them surviving load spikes and the like.

In 2016 they had a clojure/clojurescript/react single page app. They saw 30% more requests per second, which caused 3x the processor load. Bad news… One of the things he used was clojure.cache, and he had picked the fast memory cache option. After finally reading the documentation, he discovered it was the cause of their problem: a cache call would fail, which would end up in a retry loop, which would in effect cause an almost infinite loop. Oh, and his son was only two weeks old and he was sleep-deprived. He managed to replace clojure.cache with memcached, which solved the problem.

Halloween 2017. Wife in hospital. They started losing TCP packets… The main.js was barely loading, which is bad in a single page web application :-) The processor load on the load balancers just kept increasing. One of the problems was that the marketing department had recently added a fourth level to the menu structure of the website, which resulted in a 3MB json file with the full menu. To compensate a bit, they increased the gzip level to “9”, which made it a little bit smaller. But that also meant a huge increase in the load on the (bad) load balancer that had to do the compressing. Putting it back at “5” solved the issue…
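The gzip-level tradeoff is easy to demonstrate (a toy payload, not their actual 3MB menu):

```python
import gzip
import json

# A deeply nested menu blown up to a sizeable JSON payload,
# vaguely like the four-level menu from the story.
menu = {"menu": [{"sub": [{"subsub": [{"leaf": list(range(50))}]}]}
                 for _ in range(200)]}
payload = json.dumps(menu).encode()

fast = gzip.compress(payload, compresslevel=5)
slow = gzip.compress(payload, compresslevel=9)

# Level 9 squeezes out only a little extra, at a much higher CPU
# cost per request -- which is what bit the overloaded load balancer.
print(len(payload), len(fast), len(slow))
```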

A regular non-busy day in July. His son was in hospital after a vaccine shot, so he was also in the hospital. What can go wrong? Well, the site can go down in the night due to a DDoS attack. They solved it by doing a quick if/else on the attacker’s user agent string in the UI code…

2018, they did a pre-shopping-season load test. It turned out their database was hit quite hard. So they used pg_stat_statements to check all their queries. The table with products was the one being hit hard. Which was strange, because they cached it really well. Only… the cache wasn’t working. They missed a key in their cache setting…

Black friday 2018 went without a hitch.

16 november 2020. Black friday just around the corner. But after a new release, the app suddenly starts eating CPU like crazy. Deploying the old release helped. But… the changes between the old and new version didn’t look suspicious. What to do? They took a profiler and started looking at the performance. It turned out some date parsing function was mostly to blame. Parsing dates? Yes, they had just started a marketing promotion with a credit card provider, with an offer limited to a specific date. So they added the date to the config file. And there was some tooltip showing the date. And there was some library they used that tried some 20 date formats every time… The solution? Parse the config’ed date once upon application startup…
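The fix can be sketched like this (a minimal python reconstruction, not their actual clojure code; the config key is made up):

```python
from datetime import datetime

CONFIG = {"promo_deadline": "2020-11-27"}

# Parse the configured date ONCE at startup instead of on every
# request: the expensive try-20-formats parsing happens a single time.
PROMO_DEADLINE = datetime.strptime(CONFIG["promo_deadline"], "%Y-%m-%d").date()

def promo_active(today):
    # Per-request code only compares dates, no repeated parsing.
    return today <= PROMO_DEADLINE
```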

Later they had a problem talking to the database. JVM problem? Could it be the network? Postgres driver problem? PGbouncer? Postgres itself? No. Everything seemed to be working well, only it didn’t work. 20 hours later they stopped everything and started manually executing SQL select statements in desperation… and many of them got stuck without an error message??? In the end it was one corrupted file in postgres. So even the super-reliable postgres isn’t always perfect.

PyGrunn: being a python developer in NL in 2021 - Emiel Kempen


Tags: pygrunn, python

(One of my summaries of a talk at the 2021 10th Dutch PyGrunn one-day python conference).

He wants to show us some insights from the 2021 OfferZen developer survey. (The report is open data, btw, so you can download the data and do your own analysis on it.)

Background and education.

  • Junior/senior: after 4 years, you’re no longer a junior. Management roles start to pick up after 6-10 years. Note: there are differences between countries.

  • Salary: 32k for a junior, 47k intermediate, 60k senior, 73k tech lead/management. The rise in salary is pretty linear with your career progression.

  • Degrees: 57% computer science.

  • A massive amount (44%) is self-taught. 28% at school, 21% at university.

  • People start coding young! 13-18 year is the most prevalent.

  • 75% do some coding for fun outside of their jobs.

Skills and learning

  • The most promising industry: AI and cloud computing. (It is a bit weird that they’re grouped together, perhaps).

  • Python is the most desired language devs want to work with, followed by typescript and go.

  • The most used languages are javascript, sql and typescript. Python is #4.

  • Frequency of learning a new language: 30% every few months, 32% once a year and 33% every few years.


  • Devs like challenging projects (67%). New languages/frameworks is 43%.

  • Non-financial things devs look for when looking for a new job: 58% the language to work with. 50% opportunity to grow. 48% office environment or company culture.

  • 26% want to stay for at least 5 years, 23% at least 2 years. The rest is looking for jobs within 2 years or sooner.

  • Reasons to leave: 48% bad management. 42% look for a better salary. Work/life balance is number three.

  • Reason to stay: work/life balance (62%), people you work with (55%).

You can take the survey for next year’s report here:

PyGrunn: don’t trust your coverage report - Olga Sentemova


Tags: pygrunn, python

(One of my summaries of a talk at the 2021 10th Dutch PyGrunn one-day python conference).

How do we know if our code works? Perhaps the requirements you were given were unclear. Perhaps there simply is an error in your code. Perhaps your software is used in the wrong way.

There are many sorts of tests. The one she focuses on is unit tests, the one we have the most influence on as programmers.

You can use a traceability matrix where you put every individual part of the requirements (“division by zero results in an error”) in columns. In the rows you mention the tests that verify the specific requirement. Lots of work, perhaps only needed for medical equipment. You’re also constrained by the (in)completeness of the requirements. And it is a manual process…

You can also use coverage, which checks how many lines of your code are covered/executed by your tests.

When you run coverage, you should configure it properly, so that it only reports your python files. You don’t want to include the standard library or an external library in your report. Also exclude your test files from the report, as they normally have 100% test coverage and thus inflate your score.

But you should be a bit careful with the coverage report. In python, you can have statements with “and/or” or “if/else” in them, like a = 20 if b > 4 else d/c. Such a line might show up as “covered” in your report even though some parts of it weren’t executed. Luckily there’s a --cov-branch flag, which can mark your complex one-line statements as “only partially tested”.
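A tiny example of why plain line coverage misleads here (compute is a made-up function):

```python
def compute(b, c, d):
    # One physical line, but two branches: with plain line coverage
    # this line counts as "covered" even if d / c never ran.
    a = 20 if b > 4 else d / c
    return a

print(compute(10, 1, 1))  # only the "if" branch executes here
```

With branch coverage enabled, both outcomes of the conditional are tracked, so a test suite that only ever takes the “if” side gets flagged.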

Tip: use --cov-report=html to create nicely colored html output that shows you visually which lines are covered or not.

PyGrunn: large scale python satellite image processing - Ivor Bosloper


Tags: pygrunn, python, django

(One of my summaries of a talk at the 2021 10th Dutch PyGrunn one-day python conference).

Satellites. There are almost 3000 satellites orbiting the earth. Some of them have cameras, which are the interesting ones for him. He started Dacom (now CropX), agricultural software. Satellite imagery is interesting for farmers as you can see how well the crops are growing by analyzing the images.

He did a live demo. They have a website with all 800,000 fields in the Netherlands. For every field they have image data. They can show it both as a regular image and color-coded for amount of greenery. And of course a nice graph throughout the year.

You can do all sorts of analysis on it. Look at the variation in crop yields within the field, for instance. You might have to use more fertilizer in the low-yield areas. But you also have to use other data sources, like an elevation map.

They started out experimentally in 2014. In 2015, ESA launched the “Sentinel 2a” satellite (with a twin, “2b”, in 2017). The data is free, part of the EU Copernicus project! They started using the data in 2016.

The images are huge: 800MB for a 100x100km tile. They download the useful images (the ones without too much cloud cover…) and process them, apply filters, do statistics on them, etc. Lots of separate tools. They use python as the glue to tie everything together.

Some of the processing is done by open source projects provided by ESA. Also they used lots of gdal. They had to battle with performance issues. I/O overhead was one of the bigger problems. They started looking at software-as-a-service providers like sentinelhub: yes, that could work well. But they were not sure about the price they would have to pay for their huge datasets.

The EU provides the satellites and the data for free. But they still had the idea that more people could make use of it. So they recently started the “DIAS” initiative: multiple datacenters throughout Europe with locally stored raw data and processed data. So you can host your software there without having to worry about huge data traffic bills. Nice!

They built a website with django where they stored all the processed field data. So per date and per field you’d store min/max/mean/etc values. With postgis/geodjango of course for easy geographical handling.

One of the core tools they use is rasterstats, which calculates the min/max/mean stats for raster images. Probably it uses gdal and numpy and so behind the scenes. These statistics are then stored in django, ready for quick retrieval in the user interface.

PyGrunn: live blogging with wagtail - Coen van der Kamp


Tags: pygrunn, python, django

(One of my summaries of a talk at the 2021 10th Dutch PyGrunn one-day python conference).

Wagtail is a nice CMS layer on top of django and python. Wagtail is special in that it doesn’t have a built-in user-facing front end: you are invited to build your own that perfectly suits your needs. Wagtail is opinionated in that sense. It does have a nice admin interface for adding content.

Through google summer of code, an earlier idea he had on live blogging was added to wagtail. It is intended for live blogging a sports match, for instance. Lots of small messages.

It can grab input from slack or telegram. Of course, you can also use the admin interface or a REST api.

There is a bit of a workflow mechanism. You perhaps want to format the first line of an incoming message as a title.

There are multiple ways to set up the blog-viewing webpage: interval polling, long polling, websockets. Websockets are the best option, but they take more setup effort. Interval polling is the easiest option: just plain http. There is a demo site where they do some test live blogging of this conference.

PyGrunn: setting up new developers for success - Marijke Luttekes


Tags: pygrunn, python

(One of my summaries of a talk at the 2021 10th Dutch PyGrunn one-day python conference).

Get your house in order first. Before you can start training people, you need to have an onboarding strategy in place. Make sure you’ve got that beforehand. You also need your communication tools to be ready.

Oh, and documentation is important! It is good in general, but there’s a specific advantage during onboarding: if you’ve got it, new developers get an opportunity to be more independent and to look things up for themselves.

Let us get started guiding new devs. Psychological safety is important: you need to feel safe. Safe to talk about mistakes, safe to make suggestions.

If you’re the senior: keep in mind your position of power. If you say something, a junior developer might think it is The Law.

What you don’t want: a clone of yourself! It is good for a company to have diversity.

Bring the new developer with you to meetings. And let them speak! (So try to shut up a bit yourself).

Success is a team effort - and so is failure.

New developers need feedback. Compliment in public, give negative feedback in private. Oh, and make an effort in general to give compliments. In IT, we’re not very good at communication. Negative comments (“fix this”, “improve this”) in pull requests come naturally to us. But compliments??? We’re not used to them. But they can do so much good, so try to give them more often.

Limit contact moments: don’t be a “helicopter parent” that hovers over their “kids”. And stick to 1:1 sessions.

Some nuggets of wisdom:

  • Treat people like they are going to be around for years. You can ask “what if we train someone and they leave?”, but you can also ask “what if we don’t train someone and they stay?”

  • Happy employees become your billboard.

Foss4g NL: morning sessions (vector tiles, noise levels, digital twins)


Tags: foss4g

(One of my summaries of talks at the 2021 FOSS4G NL one-day conference).

Vector tiles - Edward Mac Gillavry & Frederieke Ridder

His talk is about their (webmapper) experiences with vector tiles.

They started with map based story telling. You highlight individual locations, you add textual explanation and perhaps graphs. When moving and rotating between the locations, it can be very nice to have a “2.5D” view. For instance they made a map that showed the buildings in the Netherlands, with the correct height, color-coded by height category.

The advantage of vector tiles here is the speed and ease of animation. And that you can “stretch” the

Another nice example is , an interactive map of the liberation of the Netherlands in 1944/1945. A date picker allows you to go through time. Military unit locations, food drops, liberated areas: everything is done vector-based. A big advantage is that you only have to download small amounts of data.

A new project is , a vector map with various Dutch open datasets. They host it via a Swiss company. Vector tiles give them lots of flexibility in changing the visualisation of the map: color-coding according to different criteria, for instance. If you select a different parameter to highlight, you just change the visualisation of the already-downloaded geo data instead of downloading the same data as raster images again.

So: speed, flexibility, ease of use!

Geo plus noise - Rob van Loon

He works at the RIVM, well-known in the Netherlands for their recent health-related work. But they do a lot more, for instance air quality and environmental noise. Noise can have negative effects on your health: road noise, wind turbines, heavy trains, planes, industrial noise.

Noise: they look at it in three ways: source, transmission and recipient. For recipients there’s the public building data (houses, hospitals). For sources, there are also good datasets: the public road network database, railway maps, etc.

Roads and railways are linear. You also need to look at the kind of traffic, the intensities, the kind of paving material or kind of railway sleepers, etc.

Transmission of the noise: there is a lot to take into account. Are there other buildings in the neighbourhood that can reflect noise? What’s the ground use like? A concrete parking lot transmits sound pretty well, grassland dampens it a bit. Are there woods in-between? How does the height map look? Is there line-of-sight between the source and the recipient?

Lots of calculations. And you need to do them multiple times, as they take into account various frequency bands. Low frequencies behave differently from high frequencies regarding distance traveled and the way they deflect.

A major end result is contour maps with noise levels for the whole of the Netherlands. Since a while there’s even a 3D model.

Digital twins - Marko Duiker, Thieu Caris

The Dutch province of Zeeland has a lot of use for 3D data. Everything that’s inside the ground (cables and so on). The border between

They have an IoT (internet of things) infrastructure based on TTN (The Things Network). One of the IoT users are “multiflexmeters”: open source arduino-based groundwater level meters.

They try to use as much open source as possible. Not everything is possible yet, sadly. But when they can switch, they will.

Digital twin: you try to have as good a digital representation of reality as possible. So that, for instance, when some piece of equipment tells you it needs maintenance, you don’t have to visit it to check: you can simply trust it.

Digital twin: the way the dunes and the beach change through time. If you present it in 3D, including visualizing differences between the stages, it starts to “live” for you. Interaction is easier and it is easier to work with.

They use “cesium terrain builder + cesium terrain server” to convert 2.5D data to 3D. The cesium stuff is hard to install, but with docker it gets easier (there’ll be a talk about it later). This way they’re able to visualize 3D in a reasonably performant manner.

About me

My name is Reinout van Rees and I work a lot with Python (programming language) and Django (website framework). I live in The Netherlands and I'm happily married to Annie van Rees-Kooiman.
