(One of my summaries of the May 2023 Dutch PyGrunn conference).
You might get the question “why do you use python, isn’t that a slow language”? Well, it is slower than many other languages, but not that slow. You should also look at programmer productivity. You wouldn’t write an operating system in python. You also wouldn’t write a web framework in C.
Python is dynamically typed instead of statically typed. This makes it easier for humans to understand and quicker to write: you don't have to deal with many low-level details. But it can also lead to type errors that only show up at runtime.
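A tiny illustration of my own (not from the talk): the same name can be re-bound to values of a different type, and a mismatch only blows up when the line actually runs:

    value = 42            # an int
    value = "forty-two"   # the same name now points at a str: perfectly fine

    def double(x):
        return x * 2

    print(double(21))      # 42
    print(double("21"))    # "2121": no error, but perhaps not what you meant
    # double(None)         # would raise a TypeError, but only when this line runs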
Python is interpreted instead of compiled. If you compile code, the compiler turns your program into machine code and has nothing further to do with it afterwards. With an interpreter, the interpreter is actually running your code when you run it, so it can handle all the OS-specific optimizations “live”. There’s byte code caching, so it’s not that it does double work every time.
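If you want to see the byte code the interpreter actually runs, the standard library’s dis module shows it (a quick example of my own):

    import dis

    def add_one(x):
        return x + 1

    # Print the byte code instructions CPython will interpret for this function.
    dis.dis(add_one)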
In a compiled language, variables are stored in a fixed location in memory. If you give a variable a new value, that new value is stored in the same memory location. In python, everything is stored as a PyObject with a type, the value and a reference count.
If you assign a new value to a variable name, a new PyObject is created. This can have a different “type”. The old PyObject isn’t referenced anymore, so its reference count drops to zero and it can then be garbage collected.
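You can peek at this mechanism with id() and sys.getrefcount() (a rough illustration of my own; note that getrefcount() reports one extra reference because of its own argument):

    import sys

    name = 1000.5                  # a float PyObject is created
    print(id(name))                # the identity of that PyObject
    print(sys.getrefcount(name))   # its reference count (+1 for the function argument)

    name = "something else"        # a brand new str PyObject, with a different type
    print(id(name))                # a different object: the old float is unreferenced
                                   # and can be garbage collected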
Python has to do more work, so it is less efficient than a compiled language: it is constantly creating PyObjects and looking up names. It also takes quite a bit more memory due to the PyObject “wrapper”.
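sys.getsizeof() shows that overhead (again my own illustration): a plain C int is typically 4 or 8 bytes, but a python int carries the whole PyObject bookkeeping with it.

    import sys

    print(sys.getsizeof(42))    # 28 bytes on a typical 64-bit CPython
    print(sys.getsizeof(0.5))   # 24 bytes
    print(sys.getsizeof("a"))   # 50 bytes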
The notorious GIL is the Global Interpreter Lock. It ensures thread-safe memory access: only one thread executes byte code at a time. This is especially needed to keep the reference counts accurate.
The GIL has an exception for I/O: that can happen concurrently. It is only the CPU-bound byte code that is governed by the GIL.
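A sketch of what that means in practice (my own example, with example.com as a placeholder URL): threads overlap nicely while they wait on the network, because the GIL is released during the I/O calls.

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URLS = ["https://example.com"] * 5  # placeholder URLs

    def fetch(url):
        # The GIL is released while we wait for the network, so these downloads
        # overlap even though only one thread runs byte code at a time.
        with urlopen(url) as response:
            return len(response.read())

    with ThreadPoolExecutor(max_workers=5) as pool:
        print(list(pool.map(fetch, URLS)))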
Important to keep in mind: python needs to work a bit harder to enable us to do less work: performance versus productivity. Some generic comments:
If you have a lot of I/O: threads. A lot of CPU: multiprocessing.
Watch out for loops where lots of PyObjects need to be created.
Use built-ins as much as possible. A list comprehension instead of a hand-written loop, for instance (see the sketch after this list).
The same goes for numpy and pandas built-ins. Applying a lambda to every element one by one is waaaaaay slower than applying the operation to the whole numpy array at once, where it happens efficiently inside numpy (also in the sketch after this list).
If a specific small piece of code is slow, you can try extending python. “pyx” files are cython: compiled python with extra type declarations (a minimal sketch follows after this list). You can also go down to c or rust code.
Numba, dask, ray, pypy are alternatives you can look at.
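The sketch referenced above, illustrating the built-ins and numpy points (my own example; the 10_000 element count is arbitrary):

    import numpy as np

    numbers = list(range(10_000))

    # Hand-written loop: lots of byte code and PyObject churn per element.
    doubled = []
    for number in numbers:
        doubled.append(number * 2)

    # List comprehension: the looping happens in optimized built-in machinery.
    doubled = [number * 2 for number in numbers]

    # Numpy: one vectorized operation on the whole array at once,
    # executed in compiled code instead of element by element.
    array = np.arange(10_000)
    doubled_array = array * 2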
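And a minimal sketch of the “pyx” route, assuming cython is installed (the fib example and file name are my own):

    # fib.pyx
    # The cdef type declarations let cython generate plain C loops
    # instead of creating a PyObject for every intermediate value.
    def fib(int n):
        cdef int i
        cdef long a = 0, b = 1
        for i in range(n):
            a, b = b, a + b
        return a

Build it with “cythonize -i fib.pyx” and import it like a normal module.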