
C.C. Chang's Explanation of Generated Submodels

I have no issue with Chang and Keisler's definition of "submodels generated by...", but I'm extremely confused by how they go on to define the universe $B$ of the submodel of $A$ generated by $X$, a nonempty subset of $A$ (where $A$ is a model for the language $L$). They state the following: $$B = \{t[x_1,\dots,x_n] : t\text{ is a term of $L$ and }x_1,\dots,x_n\in X\}$$ Why must this be so?

Let $S=\{t[x_1,\dots,x_n] : t\text{ is a term of $L$ and }x_1,\dots,x_n\in X\}$. Any submodel $C$ of $A$ that contains $X$ must also contain $S$: every element of $S$ is obtained by repeatedly applying operations of $A$, starting from elements of $X$ and the interpretations of the constant symbols, and $C$ contains all of these and is closed under the operations. (More formally, one proves by induction on terms that each element of $S$ is in $C$; a sketch follows below.) In particular, taking $C = B$, the submodel generated by $X$, gives $S\subseteq B$.
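Here is one way that induction on terms might be written out (this spelling-out is mine, not Chang and Keisler's; $F^{A}$ and $c^{A}$ denote the interpretations in $A$ of a function symbol $F$ and a constant symbol $c$):

```latex
% Claim: for every submodel C of A with X \subseteq C and every term t,
% t[x_1,\dots,x_n] \in C whenever x_1,\dots,x_n \in X.
\begin{itemize}
  \item Variable: if $t$ is $v_i$, then $t[x_1,\dots,x_n] = x_i \in X \subseteq C$.
  \item Constant: if $t$ is $c$, then $t[x_1,\dots,x_n] = c^{A} \in C$,
        since a submodel contains the interpretation of every constant symbol.
  \item Function: if $t$ is $F(t_1,\dots,t_k)$, then
        $t[x_1,\dots,x_n] = F^{A}\bigl(t_1[x_1,\dots,x_n],\dots,t_k[x_1,\dots,x_n]\bigr)$;
        the arguments lie in $C$ by the induction hypothesis, and $C$ is
        closed under $F^{A}$, so the value lies in $C$ as well.
\end{itemize}
```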

To prove the reverse inclusion, observe that $S$ itself is (the universe of) a submodel of $A$ containing $X$. First, $X\subseteq S$, since each $x\in X$ is the value $t[x]$ of the term $t = v_1$. Second, $S$ is closed under all the operations: applying an operation to values of terms just gives the value of a bigger term, built using the function symbol corresponding to that operation. So $S$ is a submodel of $A$ that contains $X$, and since $B$ is the smallest such submodel, $B\subseteq S$.
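If a computational picture helps: for a finite structure, the same characterization says the generated universe is the closure of $X$ (together with the constants) under the operations, and that closure can be computed by iterating until nothing new appears. Below is a minimal sketch in Python; it is my own illustration rather than anything from the book, and the concrete structure $(\mathbb{Z}_8, +, 0)$, the set $X=\{2\}$, and the name `generated_universe` are all assumptions made for the example.

```python
from itertools import product

def generated_universe(X, operations, constants=()):
    """Closure of X (plus the constant interpretations) under the listed
    operations; for a finite structure this is the universe of the
    submodel generated by X, i.e. the set S of term values."""
    closure = set(X) | set(constants)
    while True:
        # Values obtainable by one more application of some operation.
        new = {op(*args)
               for arity, op in operations
               for args in product(closure, repeat=arity)}
        if new <= closure:   # nothing new appeared: closure reached
            return closure
        closure |= new

# Example structure (assumed for illustration): A = (Z_8, +, 0), X = {2}.
ops = [(2, lambda a, b: (a + b) % 8)]
print(sorted(generated_universe({2}, ops, constants={0})))  # [0, 2, 4, 6]
```

The printed set $\{0, 2, 4, 6\}$ is exactly the set of values of terms built from $+$, $0$, and the single generator $2$, matching the equality $B = S$ argued above.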
