Problem: apply a lookup-table based map to a crapton of data.

Toy example: 363k lines, 19M file.

python3.8, df.apply: 34s
pypy3, df.apply: 65s (!!?!)

Why is pypy3 slower than python? (stackoverflow.com)
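For context, the toy benchmark is essentially a row-wise df.apply over a dict lookup. A minimal sketch, assuming made-up column names, lookup contents, and row function (the real data and table are different):

import time

import pandas as pd

lookup = {f"code_{i}": i for i in range(1000)}            # stand-in lookup table
df = pd.DataFrame({"code": [f"code_{i % 1000}" for i in range(363_000)]})

def map_row(row):
    # look the row's key up in the table, with a default for misses
    return lookup.get(row["code"], -1)

start = time.perf_counter()
result = df.apply(map_row, axis=1)
print(f"df.apply took {time.perf_counter() - start:.2f}s")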
python3.8, mapply with n_workers=-1: hopeless
same, with max_chunks_per_worker=2: slightly less hopeless, but still hopeless
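The mapply runs were wired up roughly like this; mapply.init() is where n_workers and max_chunks_per_worker go, and the data and row function below are the same stand-ins as above, not the real workload:

import mapply
import pandas as pd

lookup = {f"code_{i}": i for i in range(1000)}            # stand-in lookup table

def map_row(row):
    return lookup.get(row["code"], -1)

if __name__ == "__main__":
    df = pd.DataFrame({"code": [f"code_{i % 1000}" for i in range(363_000)]})

    # n_workers=-1 uses every available core; max_chunks_per_worker caps how
    # many chunks each worker is handed.
    mapply.init(n_workers=-1, max_chunks_per_worker=2)

    # mapply.init() patches a .mapply method onto DataFrame that mirrors .apply
    result = df.mapply(map_row, axis=1)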