Use case: they want to use python to connect a hospital to the cloud to upload radiology images securely. The images are of patients, so it has to be really secure. And in their case also certified.
Radiology images are normally in the “DICOM” format. There’s a special dicom protocol to transfer images, both for sending and getting images. A computer that implements the protocol is a “dicom node” and can be used by many applications.
The image format is mostly jpeg (actually a set of images: cross-sections of the brain, for instance) plus metadata.
They have an AI program to help diagnose images, with a microservices based architecture using docker+django. It used to run inside the hospital on their network. They wanted to move the AI server to the cloud. But how to get that working while still retaining their certification? This means that the actual certified AI server program cannot be changed.
The solution was to use an open source DICOM node software called Orthanc, which has a nice REST interface, so it is usable over https. They added one instance inside the hospital network and one in the cloud, next to the AI server.
A dicom node can send/receive to/from another dicom node, but how to manage it? This is where they used python: a daemon on both sides that monitors the REST api and the local orthanc databases and triggers the necessary “gets” and “sends”. It also handles the sending/getting between the orthanc node and the actual application the orthanc is placed next to.
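A minimal sketch of what such a daemon loop could look like, assuming Orthanc's /changes REST endpoint (which reports, among other events, studies that have become “stable”, i.e. fully received). The URL and the forwarding step are illustrative, not their actual code:

```python
import json
import time
import urllib.request

ORTHANC = "https://orthanc.example.org"  # hypothetical local Orthanc URL


def new_stable_studies(changes: dict) -> list[str]:
    """IDs of studies that Orthanc reports as fully received ("stable")."""
    return [change["ID"]
            for change in changes.get("Changes", [])
            if change.get("ChangeType") == "StableStudy"]


def poll_forever():
    since = 0
    while True:
        with urllib.request.urlopen(f"{ORTHANC}/changes?since={since}") as response:
            changes = json.load(response)
        for study_id in new_stable_studies(changes):
            # A real daemon would now trigger a "send" to the peer node.
            print("New stable study:", study_id)
        since = changes.get("Last", since)
        if changes.get("Done"):
            time.sleep(10)  # nothing new, wait a bit before polling again
```

The pure “which studies are new” decision is kept out of the polling loop, which also makes it easy to test.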
Python helped them a lot. Requests, threading, abstract base classes, logging… everything is there. And with https and client certificates, the security is good. The node on the hospital side only does “get” and “send”, so nothing needs to be sent into the hospital, as requesting an open incoming port would be very hard :-)
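With the standard library alone, the client-certificate side looks roughly like this (the file names are made up; with requests you would pass the same files via its cert= parameter instead):

```python
import ssl


def make_client_context() -> ssl.SSLContext:
    """TLS context that verifies the server and can present a client cert."""
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # With real certificate files, you'd add:
    # context.load_cert_chain(certfile="client.crt", keyfile="client.key")
    return context


context = make_client_context()
assert context.verify_mode == ssl.CERT_REQUIRED  # server certificate is checked
```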
Deployment is with docker in the cloud. In the hospitals, requesting a windows machine is easiest in practice, so they packaged the small orthanc node up as an executable that can be run as a service.
They slowly learned how to get a python app to run fine in kubernetes, including getting your local development environment to match the server environment as much as possible. And… making it as comfortable to work with as possible.
He demoed a small app built with fastapi as an example.
When using docker, there’s an extra layer between you and your code. If you change code, you need to rebuild your docker image. Or you need to mount the directory with your code into your running container. How to manage that? And you also want to make it resemble your production environment.
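The two options can be sketched like this (the image name and path are made up):

```shell
# Option 1: rebuild the image after every code change.
docker build -t myapp .
docker run --rm myapp

# Option 2: mount your working copy over the code baked into the image,
# so edits show up in the running container without a rebuild.
docker run --rm -v "$(pwd)":/app myapp
```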
One step closer to production is to use kubernetes locally. He uses k3d to run a local kubernetes cluster: it runs kubernetes in your local docker daemon! Easy to run. It also includes a local docker registry, which you can push your image to.
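The basic commands are roughly as follows (cluster and image names are made up; the registry port is assigned when k3d creates the registry):

```shell
# Kubernetes cluster running inside your local docker daemon,
# plus a local image registry next to it:
k3d cluster create demo --registry-create demo-registry

# Tag and push your image to that registry so the cluster can pull it
# (substitute the port that k3d assigned to the registry):
docker tag myapp localhost:5000/myapp
docker push localhost:5000/myapp
```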
For actually deploying to the cluster, he uses “helm”, kind of a package manager for kubernetes.
What really helps getting development working nicely with kubernetes: tilt. Smart rebuilds, live updates, automatic docker rebuilding. His demo “tiltfile” even included an automatic pip install -r of the requirements if they change.
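Roughly what such a Tiltfile can look like (docker_build, sync, run and k8s_yaml are Tilt’s built-in functions; the image name and paths are made up):

```
# Build the image and deploy the manifests; on code changes, sync files
# straight into the running container instead of rebuilding everything.
docker_build(
    'registry.example/myapp',
    '.',
    live_update=[
        sync('.', '/app'),                      # copy changed files into the pod
        run('pip install -r requirements.txt',  # only when the requirements change
            trigger='requirements.txt'),
    ],
)
k8s_yaml('k8s/deployment.yaml')
```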
Note: a tiltfile looks like a python file, but it is actually “Starlark”, a python-like configuration language whose main implementation is written in go…
He’s working on https://grand-challenge.org/ , an open source project for AI and medical imaging. You can submit imagery (“a challenge”) that can then be evaluated with various algorithms. You can test your algorithms this way.
Techs they use: django, celery, vuejs, htmx, pydicom and many more. And it is containers everywhere, with a multi-region deployment on AWS for really low-latency image access from the browser.
The platform is extensible with container images running, for instance, specific algorithms. That’s a security challenge, as they’re written by lots of people. And you’ve got to manage the allowable resources.
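With plain docker, such resource limits look like this (values and image name are made up; kubernetes has equivalent resource limits for the same purpose):

```shell
# Cap what an untrusted algorithm container may use:
docker run --rm \
    --memory=2g --cpus=1.5 --pids-limit=256 \
    --network=none \
    some-algorithm-image
```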
In AWS there are lots of ways to run your containers. The three most used ones:
AWS app runner. Cheap and easy. Good if your apps have more than 25% idle time.
Amazon Elastic Kubernetes Service, EKS. For his taste, it is way too complex. “Yaml hell”.
Amazon elastic container service, ECS. It is AWS’ opinionated way to run containers at scale. You have much less to understand: it mostly looks and handles like docker-compose.
You can use AWS fargate to run tasks on either EKS or ECS. Their software allows them to use “spot instances”, instances that are cheap because they might get killed.
They use quite a few AWS services: Simple Queue Service, EventBridge, Elastic File System. So they bought into the AWS ecosystem. But, as the actual software is all open source, they can move to other providers with some effort.
He now works for a firm that builds 3d printing robots: 3d stuff that’s up to 40 meters long :-) He’s trying to use python to manage/steer them.
His current experiments are with https://pypi.org/project/roboticstoolbox-python/, which you can use comfortably in a jupyter notebook, including a 3d visualisation. He showed a nice demo.
My name is Reinout van Rees and I work a lot with Python (programming language) and Django (website framework). I live in The Netherlands and I'm happily married to Annie van Rees-Kooiman.