Reinout van Rees’ weblog

Utrecht javascript meetup summaries


Tags: django

Normally I attend python/django meetups, but tonight I attended the Utrecht (=middle of the Netherlands) javascript meetup for the second time. I don’t have notes of the first one (that was much too visual to make writing something down worthwhile), but this time I do.

WebRTC datachannel - Roland van Laar

Roland van Laar talks about web applications for education and webrtc data channels.

Webrtc is p2p (peer to peer) for browsers. You can use it for audio, video. Datachannel is a bit like p2p websockets.

Why did he want to use it? Well, he writes educational software. In elementary schools in the Netherlands, that often means a teacher and a digiboard. He showed a demo with a website that showed a six-digit code that the teacher then had to type into his accompanying phone app. After thus connecting the web browser to the phone, he could steer the web browser from the phone. Advancing slides, and so on.

The advantage of webrtc and p2p: it is a direct connection between the phone and the browser, without being dependent on the school’s often-flaky internet connection.

How do you build up a connection? The one that starts creates an RTCPeerConnection and creates a “session offer”. The one that wants to connect can create a reply, accepting the offer, and send it back. The initial communication has to happen over another type of connection as you haven’t, well, set up the p2p connection yet :-) REST and web sockets are often used for this.
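That offer/answer dance can be sketched as plain message passing. Here is a minimal Python sketch where dicts stand in for real SDP session descriptions and another dict stands in for the REST/websocket side channel; no actual WebRTC API is used and all names are illustrative:

```python
# Minimal sketch of the offer/answer signaling dance. Plain dicts stand
# in for real SDP session descriptions; the "signaling" dict stands in
# for the out-of-band channel (REST or websockets in practice).

signaling = {}

def create_offer():
    # A real RTCPeerConnection.createOffer() produces an SDP blob;
    # we fake it with a dict.
    return {"type": "offer", "sdp": "v=0 ... (fake SDP)"}

def create_answer(offer):
    assert offer["type"] == "offer"
    return {"type": "answer", "sdp": "v=0 ... (fake SDP)"}

# The initiator publishes its offer on the side channel.
signaling["session-123"] = create_offer()

# The other peer fetches the offer, accepts it, and replies.
offer = signaling["session-123"]
signaling["session-123-answer"] = create_answer(offer)

# Once both sides have each other's description, the p2p channel can be
# established directly, without the signaling server in between.
assert signaling["session-123-answer"]["type"] == "answer"
```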

He wrote an alternative, using the local webstorage (localStorageWebRTC). This way, with javascript you can create the “session offer” and store its data locally in the browser’s localstorage.

It is fun. If you want to experiment, look at bistri, for instance. They have samples you can use.

A big problem with webrtc is that it provides almost no feedback about what succeeds or not. So if it doesn’t work, you’re out of luck. If it works, it works...

In the end it turned out to be too much pain to get/keep everything working. The current solution uses websockets.

Distributed Data Protocol (DDP) / Meteor - Peter Peerdeman

Peter Peerdeman works at .

Real time websites are fun. Like for live airplane flight tracking.

According to its website, Meteor’s distributed data protocol (DDP) is a simple protocol for fetching structured data from a server, and receiving live updates when that data changes. DDP is like “REST for websockets”.

Real-time functionality in websites is going to be the default. We are going to need to connect to live firehoses like twitter. DDP is a way to share between applications. It has no http, no REST and no request/response cycle. It is pub/sub and remote procedure calls. Basically a bi-directional json.

He showed a live demo with a map: everybody in the audience was happily dragging photos (avatars of the attendees) around on it. No XHR requests were visible in the chrome dev tools, but a lot of content was moving over a websocket connection.

The DDP spec consists of four parts:

  • Connections. You can build up connections and ping each other (and get a “pong” back).
  • Managing data. The client can subscribe to data and the server can call back with “added” and “ready” messages. Similarly “changed” and “removed”. You can also unsubscribe, of course.
  • Remote procedure calls. You can call a method on the server and get a reply back (well, a “I heard you” message, followed by bits of content and at the end a “ready” message).
  • Errors. Errors are passed in a specific json format.
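The message types above are small json documents on the wire. A sketch in Python; the `msg` names follow the DDP spec, but the collection and field names here are made up:

```python
import json

# A few DDP messages as they travel over the websocket.
ping = {"msg": "ping"}
pong = {"msg": "pong"}
subscribe = {"msg": "sub", "id": "1", "name": "avatars", "params": []}
added = {"msg": "added", "collection": "avatars",
         "id": "abc", "fields": {"x": 10, "y": 20}}
ready = {"msg": "ready", "subs": ["1"]}

# Everything is plain json on the wire:
wire = json.dumps(added)
assert json.loads(wire)["msg"] == "added"
```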

He then showed the demo’s code. One code base, with some explicit if/else parts with Meteor.isClient statements. So: one short 77-line (!) javascript file that runs both on the server and on the client!

(He mentioned a library that helped him grab the attendees’ avatars from the meetup page.)

What if you cannot use Meteor? Well, it is just a protocol. It is not tied to Meteor. He illustrated that with a second DDP demo built with Node.js on the server and a separate, relatively normal javascript file on the client.

When to use DDP? You could use it for full stack real-time web applications (if you dare). You can also create realtime functionality as an addition to an existing app. Also nice: the “Internet of Things”: lots of possibilities there!

Can it scale? Well, it is limited to the max connections your box can sustain. If you do loadbalancing, you need “session affinity”. There’s simple autoscaling if you use certain hosted solutions.

See also for the code examples.

His closing comment: push the web forward! And give DDP a try. Use DDP instead of yet-another-ad-hoc-message-protocol.

z3c.recipe.usercrontab ported to python 3


Tags: python, buildout, django

z3c.recipe.usercrontab is a simple buildout recipe to add content to your buildout user’s crontab.

Our django sites often have something like this in our buildout configs, just to give you an idea of what you can do with it:

[supervisor-cron]
# (the part names here are just examples)
# Start supervisor (which in turn starts django and celery)
# when the server starts.
recipe = z3c.recipe.usercrontab
times = @reboot
command = ${buildout:bin-directory}/supervisord

[clearsessions-cron]
recipe = z3c.recipe.usercrontab
times = 14 3 * * *
command = ${buildout:bin-directory}/django clearsessions

Easy peasy. Especially starting the site @reboot is handy. You could also add an entry in /etc/init.d/your_site, but this way most of the setup of the site is handled for you by a single bin/buildout call.

The previous release was in 2010: it simply didn’t need any changes, it just did its job and did it well :-) Now I’ve made a few small changes, mostly in the test code. Another package ported to python 3!

Automation for better behaviour


Tags: django, python, nelenschuurmans

Now... that’s a provocative title! In a sense, it is intended that way. Some behaviour is better than other behaviour. And automation that notifies you of your good or bad behaviour helps to provide better behaviour, right?

Note 2015-09-07: I changed the wording and removed some of the more provocative terminology and phrasing. The original is in my git repo so if you want the real thing you can mail me :-)

Absolute values

I think there are absolutes you can refer to, that you can compare to. Lofty goals you can try to accomplish. Obvious truths (which can theoretically be wrong...) that are recognized by many.

Take programming in python. PEP 8, python’s official style guide, is recognized by most python programmers as the style guide they should adhere to. At least, nobody in my company complained when I adjusted/fixed their code to comply with PEP 8. And the addition of bin/pep8 in all of our software projects to make it easy to check for compliance didn’t raise any protests. Pyflakes’ utility is even clearer, as it often points at real errors or obvious omissions.

For django projects, possible good things include:

  • Sentry integration for nicely-accessible error logging.
  • Using a recent and supported django version. So those 1.4 instances we still have at my workplace should go the way of the dodo.
  • Using proper releases instead of using the latest master git checkout.
  • Using migrations.
  • Tests.

Automation is central to good behaviour

My take on good behaviour is that you should either make it easy to do the good thing or you should make non-good behaviour visible.

As an example, take python releases. As a manager you can say “thou shalt make good releases”. Oh wow. An impressive display of power. It reminds me of a certain SF comic where, to teach them a lesson, an entire political assembly was threatened with obliteration from orbit. Needless to say, the strong words didn’t have a measurable effect.

You can say the same words at a programmer meeting, of course. “Let’s agree to make proper releases”. Yes. Right.

What do you have to do for a proper release?

  • Adjust the version in setup.py from 1.2.dev0 to 1.2.
  • Record the release date in the changelog.
  • Tag the release.
  • Update the version number in setup.py to 1.3.dev0.
  • Add a new header for 1.3 in the changelog.
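The version-bump steps in that list can be sketched in a few lines of Python (the helper names are made up; zest.releaser’s actual implementation differs):

```python
import re

# 1.2.dev0 -> 1.2 before the release.
def strip_dev(version):
    return re.sub(r"\.dev\d+$", "", version)

# 1.2 -> 1.3.dev0 after the release.
def bump_to_next_dev(version):
    parts = version.split(".")
    parts[-1] = str(int(parts[-1]) + 1)
    return ".".join(parts) + ".dev0"

assert strip_dev("1.2.dev0") == "1.2"
assert bump_to_next_dev("1.2") == "1.3.dev0"
```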

Now... That’s quite an amount of work. If I’m honest, I trust about 40% of my colleagues to make that effort every time they release a package.

There is a better way. Those very same colleagues can be relied on to make perfect releases all the time if all they have to do is to call bin/fullrelease and press ENTER a few times to do all of the above automatically. Thanks to zest.releaser.

Zest.releaser makes it easier and quicker to make good releases than it is to make bad/quick/sloppy releases by hand. Note that I didn’t write zest.releaser at my current job, I needed it at previous jobs, too :-)

Further examples

Now... here are some further examples to get you thinking.

All of our projects are started with “nensskel”, a tool to create a skeleton for a new project (python lib, django app, django site). It uses “paste script”; many people now use “cookie cutter”, which serves the same purpose.

  • For all projects, a proper test setup is included. You can always run bin/test and your test case will run. You only have to fill it in.

  • bin/fullrelease, bin/pep8, bin/pyflakes: if you haven’t yet installed those programs globally, you have no excuse not to use them now.

  • If you want to add documentation, sphinx is all set up for you. The docs/source/ directory is there and sphinx is automatically run every time you run buildout.

  • The README.rst has some easy do-this-do-that comments in there for when you’ve just started your project. Simple quick things like “add your name in the author field”. And “add a one-line summary to the setup.py and add that same one to the description”.

    I cannot make it much easier, right?

    Now... quite some projects still have this TODO list in their README.

    Just suggesting and not checking is not enough. A tool that checks for this specific “todo” comment in combination with github pull request integration would probably help.

Conclusion: you need automation to enable policy

You need automation to enable policy, but even that isn’t enough. I cannot possibly automatically write a one-line summary for a just-generated project. So I have to make do with a TODO note in the README and in the setup.py. Which gets disregarded more often than I like.

If even such simple things get disregarded, bigger things like “add a test” and “provide documentation” and “make sure there is a proper release script” will be hard to get right. I must admit to not always adding tests for functionality myself, though.

I’ll hereby torture myself with a quote. “Unit testing is for programmers what washing your hands is for doctors before an operation”. It is an essential part of your profession. If you go to the hospital, you don’t expect to have to ask your doctor to disinfect their hands before the operation. That’s expected. Likewise, you shouldn’t expect your clients to explicitly ask you for software tests: those should be there by default!

Again, I admit to not always adding tests. That’s bad. As a professional software developer I should make sure that at least 90% test coverage is considered normal at my company. In the cases where we measure it, coverage is probably around 50%. Which means “bad”. Which also means “you’re not measuring it all the time”. 90% should also be normal for my own code and I also don’t always attain that.

Our company-wide policy should be to get our test coverage at least to 90%. Whether or not that’s our policy, we’ll never make 90% if we don’t measure it.

And that is the point I want to make. You need tools. You need automation. If you don’t measure your test coverage, any developer or management policy statement will be effectively meaningless. If you have a jenkins instance that’s in serious neglect (70% of the projects are red), you don’t effectively have meaningful tests. Without a functioning jenkins instance, you cannot properly say you’re delivering quality software.

Without tooling and automation to prove your policy, your policy statements are effectively worthless. And that’s quite a strong value statement :-)

Lessons learned from building a django site with a reactjs front-end


Tags: django, python, nelenschuurmans

My colleague Gijs Nijholt just posted his blog entry lessons learned from building a larger app with React.js, which is about the javascript/reactjs side of a django website we both (plus another colleague) recently worked on.

Simplified a bit, the origin is a big pile of measurement data, imported from csv and xml files. Just a huge list of measurements, each with a pointer to a location, parameter, unit, named area, and so on. A relatively simple data model.

The core purpose of the site is threefold:

  • Import, store and export the data. Csv/xml, basically.
  • Select a subset of the data.
  • Show the subset in a table, on a map or visualized as graphs.

The whole import, store, export is where Django shines. The model layer with its friendly and powerful ORM works fine for this kind of relational data. With a bit of work, the admin can be configured so that you can view and edit the data.

Mostly “view” as the data is generally imported automatically. Which means you discover possible errors like “why isn’t this data shown” and “why is it shown in the wrong location”. With the right search configuration and filters, you can drill down to the offending data and check what’s wrong.

Import/export works well with custom django management commands, admin actions and celery tasks.

Now on to the front-end. With the basis being “select a subset” and then “view the subset”, I advocated a simple interface with a sidebar. All selection would happen in the sidebar, the main content area would be for viewing. And perhaps some view-customization like sorting/filtering in a table column or adjusting graph colors. This is the mockup I made of the table screen:

In the sidebar you can select a period, locations/location groups and parameters. The main area is for one big table. (Or a map or a graph component).

To quickly get a first working demo, I initially threw together three django views, each with a template that extended one base template. Storing the state (=your selection) as a dict in the django session on the server side. A bit of bootstrap css and you’ve got a reasonable layout. Enough, as Gijs said in his blog entry, to sell the prototype to the customer and get the functional design nailed down.

Expanding the system. The table? That means javascript. And in the end, reactjs was handy to manage the table, the sorting, the data loading and so on. And suddenly state started spreading. Who manages the state? The front-end or the back-end? If it is half-half, how do you coordinate it?

Within a week, we switched the way the site worked. The state is now all on the client side. Reactjs handles the pages and the state and the table and the graph and the map. Everything on one side (whether client-side or server-side) is handiest.

Here’s the current table page for comparison with the mockup shown above:

Cooperation is simple this way. The front-end is self-contained and simply talks to a (django-rest-framework) REST django backend. State is on the client (local storage) and the relevant parameters (=the selection) are passed to the server.

Django rest framework’s class based views came in handy. Almost all requests, whether for the map, the table or the graph, are basically a filter on the same data, only rendered/serialized in a different way. So we made one base view that grabs all the GET/POST parameters and uses them for a big django query. All the methods of the subclassing views can then use those query results.
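That base-view pattern can be sketched with plain Python classes standing in for the django-rest-framework views; the measurement data, field names and class names here are all invented:

```python
# "One base view builds the query, subclasses render it differently."
# A list of dicts stands in for the django ORM query results.

MEASUREMENTS = [
    {"location": "A", "parameter": "pH", "value": 7.1},
    {"location": "B", "parameter": "pH", "value": 6.8},
    {"location": "A", "parameter": "Cl", "value": 120},
]

class BaseSelectionView:
    def get_results(self, params):
        # In the real site this would be one big django query built
        # from the GET/POST parameters.
        results = MEASUREMENTS
        if "location" in params:
            results = [m for m in results
                       if m["location"] == params["location"]]
        if "parameter" in params:
            results = [m for m in results
                       if m["parameter"] == params["parameter"]]
        return results

class TableView(BaseSelectionView):
    def get(self, params):
        return {"rows": self.get_results(params)}

class GraphView(BaseSelectionView):
    def get(self, params):
        return {"points": [m["value"] for m in self.get_results(params)]}

table = TableView().get({"parameter": "pH"})
graph = GraphView().get({"parameter": "pH"})
assert len(table["rows"]) == 2
assert graph["points"] == [7.1, 6.8]
```

The table, map and graph views all share the filtering logic; each subclass only decides how to serialize the result.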

A big hurray for class based views that make it easy to put functionality like this in just one place. Fewer errors that way.

Some extra comments/tips:

  • Even with a javascript front-end, it is still handy to generate the homepage with a Django template. That way, you can generate the URLs to the various API calls as data attributes on a specific element. This prevents hard-coding in the javascript code:

      <!-- React app renders itself into this div -->
      <div id="efcis-app"></div>
      <script>
        // An object filled with dynamic urls from Django (for XHR data retrieval in React app)
        var config = {};
        config.locationsUrl = '{% url 'efcis-locaties-list' %}';
        config.meetnetTreeUrl = '{% url 'efcis-meetnet-tree' %}';
        config.parameterGroupTreeUrl = '{% url 'efcis-parametergroep-tree' %}';
        config.mapUrl = '{% url 'efcis-map' %}';
        window.config = config;
      </script>
  • Likewise, with django staticfiles’ ManifestStaticFilesStorage, you get guaranteed unique filenames, so you can cache your static files forever: great for performance.
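For reference, enabling that storage is a one-line settings change (this storage class ships with Django since 1.7):

```python
# settings.py fragment: hash-based filenames for cache-busting.
STATICFILES_STORAGE = (
    'django.contrib.staticfiles.storage.ManifestStaticFilesStorage')
```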

Lessons learned?

  • Splitting up the work is easy and enjoyable when there’s a REST back-end and a javascript front-end and if the state is firmly handled by the front-end. Responsibility is clearly divided that way and adding new functionality is often a matter of looking where to implement it. Something that’s hard in javascript is sometimes just a few lines of code in python (where you have numpy to do the calculation for you).

    Similarly, the user interface can boil down complex issues to just a single extra parameter sent to the REST API, making life easier for the python side of things.

  • When you split state, things get hard. And in practice, in my experience, that means the javascript front-end wins. It takes over the application and the django website is “reduced” to an ORM + admin + REST framework.

    This isn’t intended as a positive/negative value statement, just as an observation. Though a javascript framework like reactjs can be used to just manage individual page elements, often after a while everything simply works better if the framework manages everything, including most/all of the state.

Easy maintenance: script that prints out repair steps


Tags: python, django, nelenschuurmans

At my work we have quite a number of different sites/apps. Sometimes it is just a regular django website. Sometimes django + celery. Sometimes it also has extra django management commands, running from cronjobs. Sometimes Redis is used. Sometimes there are a couple of servers working together....

Anyway, life is interesting if you’re the one that people go to when something is (inexplicably) broken :-) What are the moving parts? What do you need to check? Running top to see if there’s a stuck process running at 100% CPU. Or if something eats up all the memory. df -h to check for a disk that’s full. Or looking at performance graphs in Zabbix. Checking our “sentry” instance for error messages. And so on.

You can solve the common problems that way. Restart a stuck server, clean up some files. But what about a website that depends on background jobs, run periodically from celery? If there are 10 similar processes stuck? Can you kill them all? Will they restart?

I had just such a problem a while ago. So I sat down with the developer. Three things came out of it.

  • I was told I could just kill the smaller processes. They can be re-run later. This means it is a good, loosely-coupled design: fine :-)

  • The README now has a section called “troubleshooting” with a couple of command line examples. For instance the specific celery command to purge a specific queue that’s often troublesome.

    This is essential! I’m not going to remember that. There are too many different sites/apps to keep all those troubleshooting commands in my head.

  • A handy script (bin/repair) that prints out the commands that need to be executed to get everything right again. Re-running previously-killed jobs, for instance.

The script grew out of the joint debugging session. My colleague was telling me about the various types of jobs and celery/redis queues. And showing me redis commands that told me which jobs still needed executing. “Ok, so how do I then run those jobs? What should I type in?”

And I could check several directories to see which files were missing. Plus commands to re-create them. “So how am I going to remember this?”

In the end, I asked him if he could write a small program that did all the work we just did manually. Looking at the directories, looking at the redis queue, printing out the relevant commands?

Yes, that was possible. So a week ago, when the site broke down and the colleague was away on holiday, I could kill a few stuck processes, restart celery and run bin/repair. And copy/paste the suggested commands and execute them. Hurray!

So... make your sysadmin/devops/whatever happy and...

  • Provide a good README with troubleshooting info. Stuff like “you can always run bin/supervisorctl restart all without everything breaking”. Or warnings not to do that but to instead do xyz.
  • Provide a script that prints out what needs doing to get everything OK again.

Runs on python 3: checkoutmanager


Tags: python, django

Checkoutmanager is a five-year-old tool that I still use daily. The idea? A simple ~/.checkoutmanager.cfg ini file that lists your checkouts/clones. Like this (only much longer):

[local]
# (the section names are arbitrary group names)
vcs = git
basedir = ~/local/
checkouts =

[svn]
vcs = svn
basedir = ~/svn/
checkouts =

In the morning, I’ll normally do a checkoutmanager up and it’ll go through the list and do svn up, git pull, hg pull -u, depending on the version control system. Much better than going through a number of them by hand!
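The core of that “up” behaviour can be sketched roughly like this; the command strings are illustrative and the real tool does much more:

```python
from configparser import ConfigParser

# Read the ini file, pick the update command per version control system.
UPDATE_COMMANDS = {"git": "git pull", "svn": "svn up", "hg": "hg pull -u"}

config_text = """
[local]
vcs = git
basedir = ~/local/

[svn]
vcs = svn
basedir = ~/svn/
"""

config = ConfigParser()
config.read_string(config_text)
for section in config.sections():
    vcs = config[section]["vcs"]
    print(config[section]["basedir"], UPDATE_COMMANDS[vcs])
```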

Regularly, I’ll do checkoutmanager st to see if I’ve got something I still need to commit. If you just work on one project, no problem. But if you need to do quick fixes on several projects and perhaps also store your laptop’s configuration in git... it is easy to forget something:

$ checkoutmanager st

And did you ever commit something but forgot to push it to the server? checkoutmanager out tells you if you did :-)

Porting to python 3. The repo was originally on bitbucket, but nowadays I keep having to look all over my screen, looking for buttons, to get anything done there. I’m just too used to github, it seems. So after merging a pull request I finally got down to moving it to github.

I also copied over the issues and added one that told me to make sure it runs on python 3, too. Why? Well, it is the good thing to do. And... we had a work meeting last week where we said that ideally we’d want to run everything on python 3.

Two years ago I started a django site with python 3. No real problems there. I had to fix two buildout recipes myself. And the python LDAP package didn’t work, but I could work around it. And supervisord didn’t run so I had to use the apt-get-installed global one. For the rest: fine.

Recently I got zest.releaser to work on python 3 (that is: someone else did most of the hard work, I helped getting the pull request properly merged :-) ). For that, several test dependencies needed to be fixed for python 3 (which, again, someone else did). Checkoutmanager had the same test dependencies, so getting the test machinery to run was just a matter of updating dependencies.

What had to be done?

  • print 'something' is now a function: print('something'). Boring work, but easy.

  • Some __future__ imports, mostly for the print function and unicode characters.

  • Oh, and setting up testing. Very easy to get both python 2.7 and 3.4 testing your software that way. Otherwise you keep on switching back/forth between versions yourself.

    (There’s also ‘tox’ you can use for local multi-python-version testing in case you really really need that all the time, I don’t use it myself though.)

  • Some from six.moves import xyz to work around changed imports between 2 and 3. Easy peasy, just look at the list in the documentation.

  • It is now try... except SomeError as e instead of try... except SomeError, e. The new syntax already works in 2.7, so there’s no problem there.

  • The one tricky part was that checkoutmanager uses doctests instead of “regular” tests. And getting string comparison/printing right on both python 2 and 3 is a pain. You need an ugly change like this one to get it working. Bah.

    But: most people don’t use doctests, so they won’t have this problem :-)

  • The full list of changes is in this pull request: .

  • A handy resource is . Many common problems are mentioned there, including solutions.

    Django’s porting tips are what I recommended to my colleagues as a useful initial guide on what to do. Sane, short advice.

Anyway... Another python 3 package! (And if I’ve written something that’s still used but that hasn’t been ported yet: feel free to bug me or to send a pull request!)

Buildout and Django: djangorecipe updated for gunicorn support


Tags: django

Most people in the Django world probably use pip to install everything. I (and the company where I work, Nelen & Schuurmans) use buildout instead. If there are any other buildout users left outside of zope/plone, I’d love to hear it :-)

First the news about the new update, after that I’ll add a quick note about what’s good about buildout, ok?

Djangorecipe 2.1.1 is out. The two main improvements:

  • Lots of old unused functionality has been removed. Project generation, for instance. Django’s own startproject is good enough right now. And you can also look at cookiecutter. Options like projectegg and wsgilog are gone as they’re not needed anymore.

  • The latest gunicorn releases didn’t come with django support anymore. You used to have a bin/django run_gunicorn (or python run_gunicorn) management command, but now you just have to run bin/gunicorn yourproject.wsgi. And pass along an environment variable that points at your django settings.

    With the latest djangorecipe, you can add a scripts-with-settings = gunicorn option and it’ll create a bin/gunicorn-with-settings script for you that sets the environment variable automatically. Handy!
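Assuming a fairly standard djangorecipe part (the project and settings names here are placeholders), that option could look like this in your buildout config:

```ini
[django]
recipe = djangorecipe
project = yourproject
settings = yourproject.settings
scripts-with-settings = gunicorn
```

After bin/buildout you’d then have the bin/gunicorn-with-settings script mentioned above.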

Advantage of buildout. To me, the advantage of buildout is threefold:

  • Buildout is more fool-proof. With pip/virtualenv you should remember to activate the virtualenv. With buildout, the scripts themselves make sure the correct sys.path is set.

    With pip install something you shouldn’t forget the -r requirements.txt option. With buildout, the requirement restrictions (“versions”) are applied automatically.

    With pip, you need to set the django settings environment variable in production and staging. With buildout, it is just bin/django like in development: it includes the correct reference to the correct settings file automatically.

    There just isn’t anything you can forget!

  • Buildout is extensible. You can extend it with “recipes”. Like a django recipe that helps with the settings and so on. Or a template recipe that generates an nginx config based on a template with the django port and hostname already filled in from the buildout config file. Or a sysegg recipe that selectively injects system packages (=hard to compile things like numpy, scipy, netcdf4).

  • Buildout “composes” your entire site, as far as possible. Pip “just” grabs your python packages. Buildout can also run npm and grunt to grab your javascript and can automatically run bin/django collectstatic --noinput when you install it in production. And generate an nginx/apache file based on your config’s gunicorn port. And generate a supervisord config with the correct gunicorn call with the same port number.

Of course there are drawbacks:

  • The documentation is definitely not up to the standards of django itself. Actually, I don’t really want to point at the effectively unmaintained main documentation site. You need some experience with buildout to be able to get and keep it working.
  • Most people use pip.

Why do I still use it?

  • The level of automation you can get with buildout (“composability”) is great.
  • It is fool-proof. One bin/buildout and everything is set up correctly. Do you trust every colleague (including yourself) to remember 5 different commands to set up a full environment?
  • If you don’t use buildout, you have to use pip and virtualenv. And a makefile or something like that to collect all the various parts. Or you need ansible even to set up a local environment.
  • Syseggrecipe makes it easy to include system packages like numpy, scipy, mapnik and so on. Most pip-using web developers only need a handful of pure python packages. We’re deep into GIS and numpy/gdal territory. You don’t want to compile all that stuff by hand. You don’t want to have to keep track of all the development header file packages!

So... hurray for buildout and for the updated djangorecipe functionality! If you still use it, please give me some feedback at or in the comments below. I’ve removed quite some old functionality and I might have broken some usecases. And buildout/django ideas and thoughts are always welcome.

Formatting python log messages


Tags: python

I see three conflicting styles of log formatting in most of the code I come across. Basically:

import logging

logger = logging.getLogger(__name__)

logger.info("%s went %s wrong", 42, 'very')
logger.info("{} went {} wrong".format(42, 'very'))
logger.info("%s went %s wrong" % (42, 'very'))

I looked at the official PEP 282 and at the official docs.

With the help of someone’s stackoverflow question I think I understand it now.

  • Use the first version of the three examples. So:

    • The actual log message with the old, well-known %s (and %d, %f, etc) string formatting indicators.
    • As many extra arguments as you have %s-thingies in your string.
  • Don’t use the second and third example, as both of them format the string before it gets passed to the logger. So even if the log message doesn’t need to be actually logged somewhere, the full string gets created.

    The first example only gets passed a string and some extra arguments and only turns it into a real full string for in your logfile if it actually gets logged. So if you only display WARN level and higher, your DEBUG messages don’t need to be calculated.

  • There is no easy way to use the first example with {} instead of %s.
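The lazy-evaluation point is easy to verify: pass the logger an object whose __str__ records whether it was called.

```python
import logging

# __str__ on the argument only runs when the message is actually
# emitted; a filtered-out DEBUG message is never formatted.
class Expensive:
    def __init__(self):
        self.called = False

    def __str__(self):
        self.called = True
        return "expensive representation"

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("demo")

e = Expensive()
log.debug("value: %s", e)    # below WARNING: filtered, not formatted
assert e.called is False

log.warning("value: %s", e)  # emitted: now __str__ does run
assert e.called is True
```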

So: use the logger.info("%s went %s wrong", 42, 'very') form.

(Unless someone corrects me, of course)

Djangocon sprint: zest.releaser 5.0 runs on python 3


Tags: djangocon, python

Good news! zest.releaser supports python 3.3+ now!

Now... what is zest.releaser? zest.releaser takes care of the boring release tasks for you. So: make easy, quick and neat releases of your Python packages. The boring bit?

  • Change the version number (1.2.dev0 -> 1.2 before releasing and 1.2 -> 1.3.dev0 after releasing).
  • Add a new heading in your changelog. Preferably record the release date in the changelog, too.
  • svn/git/bzr/hg tag your project.
  • Perhaps upload it to pypi.

Zest.releaser takes care of this for you! Look at the docs on for details. The short page on our assumptions about a releasable python project might be a good starting point.

Now on to the python 3.3 support. It was on our wish list for quite some time and now Richard Mitchell added it, kindly sponsored by isotoma! We got a big pull request last week and I thought it would be a good idea to try and merge it at the djangocon sprint. Reason? There are people there who can help me with python 3.

Pull request 101 was the important one. So if you want to get an idea of what needs to be done on a python package that was first released in 2008, look at that one :-)

Long-time zest.releaser users: read Maurits’ list of improvements in 4.0, released just two weeks ago.

Running on python 3 is enough of a change to warrant a 5.0 release, though. Apart from python 3, the biggest change is that we now test the long description (what ends up on your pypi page) with the same readme library that pypi itself uses. So no more unformatted restructured text on your pypi page just because there was a small error somewhere. Handy. Run longtest to check it. If it renders OK, it’ll be opened in your webbrowser, always handy when you’re working on your readme.

Djangocon: React.js workshop - Maik Hoepfel


Tags: djangocon

He compares react.js to angular. Angular basically wants to take over your whole page. Multiple apps on one page, mixed with some jquery, aren’t really possible. You also have to learn a lot of terminology. It is a full stack framework. And angular 2.0 won’t have a migration path from angular 1.x...

React.js only does one thing and does it well: the view. It doesn’t do data fetching, url routing and so on.

React.js is quite pythonic. You can start adding little bits at a time to your code. You cannot really do anything wrong with it.

A core idea is that the html is in one corner. That’s the output. And the state is in the other corner. They’re strictly separated, which makes it easier to reason about.

It is built by facebook. They take page load time seriously. You can even render react.js partially on the server side (inside node.js), cutting down rendering time, especially on mobile. Facebook is pretty deeply invested in it, so the chances of it disappearing are low. (As opposed to angular: google barely uses it itself).

When you start using react.js on a page, you’ll have to start thinking about components. An “unordered list of results”, a “search box”. Those kinds of components.

  • Component: basically a piece of html, handled by react. If react renders it, html comes out. It looks at its state and properties to render it (“it calls the .render() method”).

  • State: the current state of the component. The component can change this.

    If you can do without state: do it. You’re better off calculating it from something else. And if it makes sense to store the state on a higher-level component: do it.

    If a component’s state changes, you can assume that the render method gets called (it does some magic so that it only updates html if there are changes).

  • Properties: what you pass a component from the outside. Think parameters, like a list of products. The component cannot change these.

React.js code is quite simple to reason about as there’s a simple path through the code. If some method changes the state, react handles the rendering. The rendering might set up click handlers or other things. If such a click handler changes the state, react renders everything and so on and so on.
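As a rough analogy only (sketched in Python, since that's this blog's usual language; this is emphatically not react.js's actual API), the props/state/render relationship described above could look like:

```python
class Component:
    """Toy illustration of the react.js model: props are read-only
    input from outside, state is owned by the component, and every
    state change triggers a re-render."""

    def __init__(self, props):
        self.props = props          # passed in from outside; never changed here
        self.state = {'count': 0}   # owned and changed by the component itself

    def set_state(self, **changes):
        self.state.update(changes)
        return self.render()        # react calls render for you on state changes

    def render(self):
        # Looks at props and state, spits out html.
        return '<p>%s: %s</p>' % (self.props['label'], self.state['count'])


button = Component({'label': 'clicks'})
html = button.set_state(count=1)    # e.g. a click handler changed the state
print(html)
```

In real react.js the re-rendering is diffed against a virtual DOM, so only the actually-changed html gets touched.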

Integration with existing jquery plugins is generally possible, but you’d rather not do it. React.js tries to keep its state nicely in a corner, while jquery plugins normally store state on the DOM itself. So watch out a bit. You might have to build more custom html yourself than you would otherwise, when you could just have grabbed one of the existing jquery plugins.

A note about facebook’s flux. “A flux” is just a structure for building large applications. There are multiple ones. He hopes you don’t have to build such a large application yet, as the flux you’re choosing might not be the one that wins out in the end.

If you need a good example of django and react.js, look at django-mediacat.

Question: “How do you integrate with django? Does django still render forms?” Answer: normally, you’d decouple the front-end and back-end, at least for the bits you let react.js handle. So perhaps generate some javascript variables in the django template. And grab the rest via a REST API.

Some clarification to the previous paragraph: you can choose yourself how much of the page you let react.js handle. He himself is using react.js to handle a couple of interactive elements on an otherwise-static django-rendered page.
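A minimal sketch of the "generate some javascript variables in the django template" idea (the variable name and the product data are made up for illustration; in a real django view this string would end up in a script tag in the template):

```python
import json

# Hypothetical data the django view would look up for the page.
products = [{'id': 1, 'name': 'shoe'}]

# What the template would render into a <script> tag, so the
# react.js component can pick it up as its initial props. Later
# updates then go through the REST API instead.
snippet = 'var INITIAL_PRODUCTS = %s;' % json.dumps(products)
print(snippet)
```

This keeps the first page load fast (no extra API round trip for data django already has), while the react.js parts stay decoupled for everything afterwards.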

Question: “Are you happy with react.js?”. Answer: “YES!”. It is quite small in scope. It doesn’t get in your way. Easy to get started with. You can’t do an awful lot wrong (he has seen terrible examples with angular: it gives you much more rope to hang yourself with). You can progressively add interactivity to your page.

Tip: keep your react.js components small. Only let it handle spitting out html. Do any calculations you might have to do on the server side, behind your REST API.

