Different Results with Same Resume and Job Description

You may get slightly different results when you run an evaluation on the same resume and the same job description. This is a natural characteristic of generative AI. Why does it happen? Let's shed some light on the underlying mechanics and reasoning.

Stochasticity

The primary reason behind the variability in responses is stochasticity, or randomness. GPT models generate text by selecting the next word in a sequence with a degree of randomness. The model calculates a probability for each candidate word, but it doesn't always pick the most probable one. This sampling produces a variety of responses and prevents the model from getting "stuck" in a predictable, repetitive loop.
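To make this concrete, here is a minimal Python sketch of temperature-based sampling, the kind of step GPT-style models perform for every token. The scores (logits) and the temperature value below are illustrative, not taken from any particular model.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample the next token index from raw model scores (logits).

    Higher temperature flattens the distribution (more randomness);
    lower temperature sharpens it (more predictable output).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    # Softmax: convert scores into a probability distribution.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Draw according to the probabilities: the top-scoring token is the
    # most likely pick, but not a guaranteed one.
    return rng.choice(len(probs), p=probs)

# Toy scores for four candidate next words.
logits = [2.0, 1.5, 0.3, -1.0]
print([sample_next_token(logits, temperature=0.8) for _ in range(10)])
# Repeated runs favor index 0 but occasionally pick others,
# which is exactly why repeated evaluations can differ.
```

Because each token is drawn rather than looked up, two runs over the same resume and job description can diverge after a few words, and the differences compound from there.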

Model’s Objective

The main goal of large language models (LLMs) is to produce text that is coherent, relevant, and diverse. If the model always returned the same answer for a given input, it would be less useful for brainstorming or for generating creative content. By introducing variability, the model becomes a more versatile tool that can present different perspectives or angles on the same question.

Lack of Fixed Determinism

LLMs don't follow a fixed set of deterministic rules. They aren't databases that return the same output for the same input. Instead, they're closer to creative machines, trained to generate human-like text from the vast amounts of data they've been exposed to. This non-deterministic behavior is part of their design.
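If repeatability matters more than variety, the sampling can be dialed down. The sketch below assumes the OpenAI Python SDK; the model name, prompt, and parameter values are placeholders, and even with a low temperature and a seed, identical outputs are a best-effort goal rather than a guarantee.

```python
# A minimal sketch, assuming the OpenAI Python SDK is installed and
# OPENAI_API_KEY is set in the environment. Model name and prompt are
# placeholders for illustration only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Evaluate this resume ..."}],
    temperature=0,        # reduces, but does not eliminate, randomness
    seed=42,              # best-effort reproducibility hint
)
print(response.choices[0].message.content)
```

Even with these settings, small differences can remain between runs, which is why some variation in evaluation results is expected behavior rather than a defect.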

A Reflection of Human Nature

Imagine asking a friend the same question on different days or even different times of the day. Depending on their mood, recent experiences, or thoughts at the moment, their answer might slightly vary. While an LLM doesn't have feelings or moods, its variable responses mirror the diverse ways humans might respond to the same stimuli.

Conclusion

The variable output of LLM-based generative AI makes it an intriguing and versatile tool. While it might seem puzzling at first, understanding the reasons behind the differences deepens one's appreciation for the complexities of the model. It's a reminder that, much like human conversation, AI-generated content can be dynamic, fresh, and ever-evolving.