Python multiprocessing OOM handling

Ankush Jain
1 min read · Jul 22, 2022

Situation: you want to join 500 dataframes against one large dataframe.

multiprocessing.Pool seems to keep getting stuck. What’s happening:

  1. A subprocess is being killed by the Linux OOM killer (check syslog).
  2. multiprocessing doesn’t handle this well: the pool loses the in-flight task and blocks forever waiting for a result that will never arrive (see the sketch below).
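Minimal repro sketch of the hang (worker and the kill condition are made up for illustration; SIGKILL stands in for the OOM killer):

import os
import signal
from multiprocessing import Pool

def worker(i):
    if i == 3:
        os.kill(os.getpid(), signal.SIGKILL)  # stand-in for an OOM kill
    return i * i

if __name__ == "__main__":
    with Pool(4) as pool:
        # The pool spawns a replacement worker, but the killed task's
        # result never arrives on the return queue, so map() blocks forever.
        print(pool.map(worker, range(8)))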

Another problem with multiprocessing, unrelated to OOM:

If a task raises an exception, the exception is put on the return queue instead of being raised immediately. Only after all tasks have executed (and failed) is the return queue consumed and the exception re-raised. Wasting your time. x_x
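A sketch of that behavior; the sleep is only there to make the delay visible:

import time
from multiprocessing import Pool

def worker(i):
    if i == 0:
        raise ValueError("fails immediately")
    time.sleep(5)  # the remaining tasks still run to completion
    return i

if __name__ == "__main__":
    with Pool(4) as pool:
        # The ValueError was raised right away, but map() only re-raises
        # it once every task has finished, roughly 5 seconds later.
        pool.map(worker, range(4))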

Solution: use concurrent.futures.ProcessPoolExecutor instead.

import traceback
from concurrent.futures import ProcessPoolExecutor, as_completed

# worker and all_args come from your own code
with ProcessPoolExecutor(max_workers=4) as executor:
    futures = { executor.submit(worker, args): args for args in all_args }
    for future in as_completed(futures):
        try:
            data = future.result()
            print(data)
        except Exception as exc:
            print(exc)
            traceback.print_exc()

With this, future.result() raises a BrokenProcessPool exception if a subprocess gets OOM-killed, instead of hanging forever.
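If you want to tell the OOM case apart from an ordinary task failure, you can catch it explicitly. A sketch of what the try/except inside the loop above could look like (BrokenProcessPool lives in concurrent.futures.process):

from concurrent.futures.process import BrokenProcessPool

try:
    data = future.result()
except BrokenProcessPool:
    # a worker died abruptly (e.g. OOM-killed); the whole pool is broken now
    raise
except Exception as exc:
    # an ordinary exception raised inside the task
    print(exc)
    traceback.print_exc()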
