
What is the relationship between oom_score and badness?

Whilst reading both < and < I have come across the terms `oom_score` and badness. Both numbers have the same basic meaning: the higher they are, the more likely the associated task is to be OOM-killed when the host is under memory pressure. What is the relationship (if any) between the two numbers?

EDIT: My guess is `oom_score` = max(badness + `oom_score_adj`, 0), but I haven't found any proof.
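For reference, both per-task values can be read straight from procfs: `/proc/<pid>/oom_score` is read-only and computed by the kernel, while `/proc/<pid>/oom_score_adj` is the knob userspace can set. A minimal sketch that just prints both for the current process (the small helper is only illustrative):

```c
/* Print the two per-task values under discussion:
 * /proc/<pid>/oom_score     - read-only, computed by the kernel
 * /proc/<pid>/oom_score_adj - user-settable, range -1000..1000 (see proc(5))
 */
#include <stdio.h>

static void show(const char *path)
{
    char buf[32];
    FILE *f = fopen(path, "r");

    if (f && fgets(buf, sizeof(buf), f))
        printf("%s: %s", path, buf);
    if (f)
        fclose(f);
}

int main(void)
{
    show("/proc/self/oom_score");
    show("/proc/self/oom_score_adj");
    return 0;
}
```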

It looks like it is:

> oom_score = badness * 1000 / totalpages

based on the kernel code <


```c
static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns,
                          struct pid *pid, struct task_struct *task)
{
        unsigned long totalpages = totalram_pages + total_swap_pages;
        unsigned long points = 0;

        points = oom_badness(task, NULL, NULL, totalpages) *
                        1000 / totalpages;
        seq_printf(m, "%lu\n", points);

        return 0;
}
```
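To see that scaling in practice, here is a rough userspace cross-check. It is only a sketch: it approximates badness by the task's resident set size from `/proc/<pid>/status`, ignoring swap, page tables and `oom_score_adj`, which the real `oom_badness()` also accounts for, so it only loosely tracks the kernel's value.

```c
/* Rough cross-check of oom_score = badness * 1000 / totalpages.
 * badness is approximated by VmRSS only, so the estimate merely
 * ballparks the kernel's /proc/<pid>/oom_score.
 * Build: cc -o oomcheck oomcheck.c ; run: ./oomcheck <pid>
 */
#include <stdio.h>
#include <string.h>

/* Read a "Key:   value kB" field from a /proc file; returns kB or -1. */
static long read_kb(const char *path, const char *key)
{
    char line[256];
    long val = -1;
    FILE *f = fopen(path, "r");

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, key, strlen(key)) == 0) {
            sscanf(line + strlen(key), " %ld", &val);
            break;
        }
    }
    fclose(f);
    return val;
}

int main(int argc, char **argv)
{
    char path[64];
    long mem_kb, swap_kb, rss_kb, score = -1;
    FILE *f;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    mem_kb  = read_kb("/proc/meminfo", "MemTotal:");
    swap_kb = read_kb("/proc/meminfo", "SwapTotal:");
    snprintf(path, sizeof(path), "/proc/%s/status", argv[1]);
    rss_kb  = read_kb(path, "VmRSS:");
    if (mem_kb < 0 || swap_kb < 0 || rss_kb < 0) {
        fprintf(stderr, "could not read /proc values\n");
        return 1;
    }

    /* Same scaling as proc_oom_score(): points * 1000 / totalpages.
     * kB cancels against kB, so there is no need to convert to pages. */
    printf("RSS-based estimate: %ld\n", rss_kb * 1000 / (mem_kb + swap_kb));

    snprintf(path, sizeof(path), "/proc/%s/oom_score", argv[1]);
    f = fopen(path, "r");
    if (f && fscanf(f, "%ld", &score) == 1)
        printf("kernel oom_score:   %ld\n", score);
    if (f)
        fclose(f);

    return 0;
}
```

Because the ratio is dimensionless, using kilobytes instead of pages gives the same result; the point is simply that `oom_score` is badness expressed in thousandths of total RAM plus swap.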
