In mathematics, an embedding (or imbedding) is one instance of some mathematical structure contained within another instance, such as a group that is a subgroup. When some object $X$ is said to be embedded in another object $Y$, the embedding is given by some injective and structure-preserving map $f\colon X\to Y$.

Thus speaks Wikipedia. And so we must insist on $X$ being isomorphic to $f(X)$ in whatever category we are dealing with; otherwise there is no reason to say that $Y$ contains $X$ in any way. This is why geometers distinguish immersions from embeddings, and note that even an injective immersion may fail to be an embedding.

And yet, people speak of the Sobolev (Соболев) embedding theorem and its relatives due to Morrey, Rellich, Кондрашов… The common feature of these theorems is that one normed space $X$ is set-theoretically contained in another normed space $Y$, and the inclusion map $X\to Y$ is continuous (or even compact, in the case of Rellich–Кондрашов). But this inclusion map is not an isomorphism; the image of $X$ inside of $Y$ may look nothing like $X$ itself. For instance, the Sobolev theorem says that, in the two-dimensional setting, the space of functions with integrable gradient ($W^{1,1}$) is “embedded” into $L^2$. Even though there is nothing inside of $L^2$ that looks like $W^{1,1}$.

Let’s consider a much simpler example: the space $\ell^1$ of absolutely summable sequences is contained in the space $\ell^2$ of square-summable sequences. This inclusion is a continuous, indeed non-expanding map: $\|x\|_2 \le \|x\|_1$ because $\sum_k x_k^2 \le \left(\sum_k |x_k|\right)^2$. But there is no subspace of $\ell^2$ that is isomorphic to $\ell^1$, isomorphisms being invertible linear maps. This can be shown without any machinery: Suppose $T\colon \ell^1\to\ell^2$ is a linear map such that $C^{-1}\|x\|_1 \le \|Tx\|_2 \le C\,\|x\|_1$ for some constant $C$, for all $x\in\ell^1$. Consider the unit basis vectors $e_k$, i.e., $e_1=(1,0,0,\dots)$, $e_2=(0,1,0,\dots)$, etc. Denote $y_k = Te_k$. For any choice of signs $\epsilon_k\in\{\pm 1\}$, $\left\|\sum_{k=1}^n \epsilon_k e_k\right\|_1 = n$ and, therefore,
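The inequality $\|x\|_2 \le \|x\|_1$ behind this non-expanding inclusion is easy to sanity-check numerically on finitely supported sequences (a quick sketch; the helper names and random test data are mine):

```python
import random

def norm1(x):
    # ell^1 norm: sum of absolute values
    return sum(abs(t) for t in x)

def norm2(x):
    # ell^2 norm: square root of the sum of squares
    return sum(t * t for t in x) ** 0.5

# finitely supported sequences stand in for elements of ell^1
random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(20)]
    assert norm2(x) <= norm1(x) + 1e-12  # the inclusion is non-expanding
```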

$$\left\|\sum_{k=1}^n \epsilon_k y_k\right\|_2 \ge C^{-1} n. \qquad (*)$$

On the other hand, taking the **average** of $\left\|\sum_{k=1}^n \epsilon_k y_k\right\|_2^2$ over all $2^n$ sign assignments kills all cross terms, leaving us with $\sum_{k=1}^n \|y_k\|_2^2$, which is at most $C^2 n$ since each $\|y_k\|_2 \le C\,\|e_k\|_1 = C$. As $n\to\infty$, we obtain a contradiction between the latter inequality and (*).
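The averaging step can be checked by brute force: for any vectors $y_1,\dots,y_n$, the average of $\left\|\sum_k \epsilon_k y_k\right\|_2^2$ over all $2^n$ sign choices equals $\sum_k \|y_k\|_2^2$ exactly, because each cross term $2\epsilon_j\epsilon_k\langle y_j, y_k\rangle$ appears with either sign equally often. A small sketch with arbitrary vectors:

```python
import itertools
import random

random.seed(1)
n, dim = 5, 8
ys = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]

def sq_norm(v):
    # squared Euclidean norm
    return sum(t * t for t in v)

# average ||sum_k eps_k y_k||^2 over all 2^n sign assignments
total = 0.0
for signs in itertools.product((1, -1), repeat=n):
    diag = [sum(s * y[i] for s, y in zip(signs, ys)) for i in range(dim)]
    total += sq_norm(diag)
average = total / 2**n

# cross terms cancel, leaving the sum of squared side lengths
sum_of_squares = sum(sq_norm(y) for y in ys)
assert abs(average - sum_of_squares) < 1e-9
```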

In a Hilbert space, the sum of squared diagonals of a parallelepiped is precisely controlled by the sum of squares of its sides. In $\ell^p$ for $p<2$ the diagonals may be “too long”, but cannot be “too short”. In $\ell^p$ for $p>2$ it’s the opposite. (And in $\ell^\infty$ anything goes, since it contains an isometric copy of every separable normed space.) Further development of these computations with diagonals leads to the notions of **type $p$** and **cotype $q$** of a Banach space: type gives an upper bound on the length of diagonals, cotype gives a lower bound.
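To see the contrast concretely, take the parallelepiped spanned by the first $n$ unit vectors and measure its main diagonal $(1,1,\dots,1)$ in various norms; the Hilbert-space benchmark is $\sqrt{n}$. A minimal sketch (the norm helper is mine):

```python
def lp_norm(x, p):
    # ell^p norm of a finite sequence; p = inf gives the sup norm
    if p == float("inf"):
        return max(abs(t) for t in x)
    return sum(abs(t) ** p for t in x) ** (1 / p)

n = 100
diag = [1.0] * n  # main diagonal of the cube spanned by e_1, ..., e_n

# In ell^1 the diagonal far exceeds the Hilbert benchmark sqrt(n);
# in ell^inf it falls far short; in ell^2 it matches exactly.
assert lp_norm(diag, 1) == n                      # "too long": n >> sqrt(n)
assert abs(lp_norm(diag, 2) - n ** 0.5) < 1e-9    # exactly sqrt(n)
assert lp_norm(diag, float("inf")) == 1           # "too short": 1 << sqrt(n)
```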