1.4 How LLMs Consume Annotations

When a user submits a prompt to an AI-powered development tool (such as Select AI or SQL Developer), the tool first augments the prompt with additional context.

Before forwarding the prompt to an LLM, the tool retrieves the annotations for all the database objects involved in the user's request. It then programmatically injects these annotations into the system instruction portion of the prompt for the LLM's consumption.

This process enriches the prompt with detailed contextual information that guides the model's response. For example, the final instruction sent to the LLM may contain the following information:

"You are a SQL generator. The schema contains a table named EMPLOYEE. DESCRIPTION: Current and former employees, including contractors. The column EMP_ID has the following aliases: employee number, worker id, person number. The column DEPT_NO is often joined with DEPARTMENT.DEPT_NO."
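The assembly of such an instruction can be sketched in a few lines of Python. The row shape (object name, column name, annotation name, annotation value) and the `build_system_instruction` helper are illustrative assumptions for this sketch, not part of any tool's documented API:

```python
def build_system_instruction(annotations):
    """Assemble a system instruction from annotation rows.

    Each row is (object_name, column_name, annotation_name, annotation_value);
    column_name is None for table-level annotations. This row shape loosely
    mirrors annotation metadata a tool might read from the data dictionary.
    """
    parts = ["You are a SQL generator."]
    for obj, col, name, value in annotations:
        if col is None:
            # Table-level annotation, e.g. a DESCRIPTION of the table.
            parts.append(f"The schema contains a table named {obj}. {name}: {value}.")
        else:
            # Column-level annotation, phrased as a sentence about the column.
            parts.append(f"The column {col} {name} {value}.")
    return " ".join(parts)


# Hypothetical annotations matching the example prompt above.
annotations = [
    ("EMPLOYEE", None, "DESCRIPTION",
     "Current and former employees, including contractors"),
    ("EMPLOYEE", "EMP_ID", "has the following aliases:",
     "employee number, worker id, person number"),
    ("EMPLOYEE", "DEPT_NO", "is often joined with",
     "DEPARTMENT.DEPT_NO"),
]

print(build_system_instruction(annotations))
```

Running the sketch reproduces a system instruction equivalent to the quoted example, with one sentence per annotation appended after the role statement.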

Because the prompt now includes the necessary context, the model no longer has to infer schema semantics and relationships solely from object names. As a result, it can generate significantly more accurate SQL queries.