Before thinking about autospacing and autokerning, to define, understand, and then use the *‘white space’ structure* it’s necessary to build a *theory* starting from its function. As we have seen, the white space manages the instances of *proximity* and *rhythm*. And just as these instances have independent and sometimes opposite requirements, the medium described by the mathematical model must be characterized by the same dual nature. The *proximity* model assumes the existence of a mechanism that tends to pull distant points closer and push near points apart. In this way the white space can be seen as a medium endowed with non-linear elasticity, distorted by the geometries of the letters’ outlines which, being constraints, act as boundary conditions. In the case of autokerning we have two facing outlines. In the case of autospacing the *other* outline is *virtual* and modeled as a distortion suitably imposed on the medium itself. From a conceptual point of view there’s little difference. In an elastic field, deformations induce tensions: they generate forces applied to the outlines. Forces that have to be balanced. The disposition of the lines of force discriminates between inactive and active zones. Inactive zones are those portions of white space, like the inside of a ‘C’, that are left *embedded* in the concavities of the outlines and are in effect blank space inside the letters.
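The push-and-pull mechanism can be sketched as a toy model. A purely illustrative sketch follows: the cubic force law, the constants, and the bisection solver are arbitrary assumptions chosen for clarity, not iKern’s actual formulas. Each pair of facing outline points contributes a non-linear ‘spring’ force, attractive beyond a rest distance and repulsive below it; the spacing is the rigid shift that brings the net force to zero.

```python
def pair_force(gap, rest=60.0, k=1.0):
    """Non-linear elastic force for one pair of facing outline points.
    Positive = attraction (gap too wide), negative = repulsion (gap too narrow).
    A cubic law: small deviations act gently, large ones strongly."""
    return k * ((gap - rest) / rest) ** 3

def net_force(offset, gaps):
    """Total force on the right outline when it is rigidly shifted by
    `offset`. `gaps` are the facing-point distances at offset 0."""
    return sum(pair_force(g + offset) for g in gaps)

def equilibrium_offset(gaps, lo=-200.0, hi=200.0, tol=1e-6):
    """Bisection: find the rigid shift that balances all pair forces.
    net_force grows monotonically with offset, so one root exists."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_force(mid, gaps) > 0:   # net attraction: move closer
            hi = mid
        else:                          # net repulsion: move apart
            lo = mid
    return (lo + hi) / 2

# One narrow gap (e.g. a serif overhang) against two wider ones below:
gaps = [35.0, 70.0, 80.0]
print(round(equilibrium_offset(gaps), 2))  # → 1.94
```

Note how the single narrow gap, whose repulsion grows cubically, almost cancels the attraction of the two wide ones: the equilibrium is a balance of forces, not an average of distances.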

Coming back to the definition of *rhythm* as a regular sequence of reference points that marks the linear movement of the eye: if the white spaces are the *visual beats* of a stream of letters, then they must somehow possess some sort of constancy and invariance. It’s therefore clear that it’s not just a matter of simple geometry and that something else needs to be measured too. In iKern this is made possible by introducing the concept of *variable density*, referred to an *optical space* no longer coincident with the geometric one, with relations between its quantity and its distribution. The density levels are generally not uniform but vary from point to point depending on the boundary shapes: the glyph outlines. This distribution defines preferential settlement directions along which a *redistribution* takes place as a specific balancing method. This kind of medium is no longer solid: it’s more similar to *air* that flows and acts as a whole. Redistribution can lead to a geometric expansion or contraction of the space, and therefore result in a *displacement* (a variation in sidebearing or kerning). Or it can induce an accumulation or loss of density in the absence of movement. The capacity of a line to balance variations of density within itself by means of its curvature, which could be called *shape inertia*, links the density and geometry fields. The stem/round modulation in sans serif typefaces, for example, is all here. Two consecutive vertical straight lines (two stems; let’s say two ‘H’) can move freely relative to each other because they don’t induce any non-uniformity in the density field, so no further steady redistribution is possible: they have no inertia; everything is geometry. On the other hand, two consecutive rounds (let’s say two ‘O’) are the immutable measure around which all the letterfitting is based: moving two ‘O’ means having to change everything else.
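The stem/round contrast can be made tangible with a toy proxy. In this purely illustrative sketch (the concave `gap**p` weighting and all the numbers are assumptions, not iKern’s actual density functional), the white space between two glyphs is sampled row by row, and a concave weighting of each row’s gap stands in for the ‘optical’ amount of air it holds. Shifting flat stems changes the total noticeably; shifting rounds changes it less, because their curvature already pools density where the gap opens up.

```python
import math

def gap_profile(left_edge, right_edge, offset=0.0):
    """Row-by-row horizontal gap between two facing outline edges,
    each given as x-positions sampled at equally spaced heights."""
    return [r - l + offset for l, r in zip(left_edge, right_edge)]

def optical_volume(gaps, p=0.5):
    """'Air' perceived in the gap: a concave weighting (here gap**p)
    makes wide rows count less than their geometric area."""
    return sum(g ** p for g in gaps)

n = 9
# Two stems (H|H): flat edges, constant gap -- pure geometry.
stem_l = [100.0] * n
stem_r = [160.0] * n

# Two rounds (O|O): curved edges, gap widens toward top and bottom.
round_l = [100.0 - 30.0 * math.sin(math.pi * i / (n - 1)) for i in range(n)]
round_r = [160.0 + 30.0 * math.sin(math.pi * i / (n - 1)) for i in range(n)]

for name, l, r in [("H|H", stem_l, stem_r), ("O|O", round_l, round_r)]:
    base = optical_volume(gap_profile(l, r))
    moved = optical_volume(gap_profile(l, r, offset=10.0))
    print(name, "relative change:", round((moved - base) / base, 3))
```

With these numbers the flat stems respond more to the same 10-unit shift (≈0.080 relative change) than the rounds do (≈0.054): a crude stand-in for *shape inertia*, where curvature buffers part of a displacement that flat edges pass on in full.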

Sometimes density redistributions can create stable configurations far away from neutral states, and this is what gives the white space its proteiform ability. For example: the low-density equilibrium configurations that may result when outlines that carry a lot of space around them without being essentially concave (as is the case with an ‘L’ or a ‘T’) interact. They describe, in practice, a behaviour that could be called *bodiliness*, where the eye interprets part of the surrounding white space as belonging to the letter; a space which therefore can’t be invaded.

Density field and tensional state go beyond a simple geometric description, introducing the possibility of different kinds of equilibrium and equivalence. The iKern model foresees the existence of a range of acceptable configurations, which may be broad or narrow depending on the case. Mathematically this means that there are more variables than equations, so the system is underdetermined and, consequently, inherently *dynamic*. In this sense the modeling captures the full essence of the adaptive capacity the white space needs to fulfill its duty. The *proximity model* and the *air model* interact in such a way that the solutions found minimize the global *energy* level. **Total system energy minimization** is what the laws of nature require in any physical system, so it’s no surprise to find it here too. Modeling is, in the end, the creation of a similitude of something else: of what, in this case, I don’t know; I can only imagine. But I think it’s something potentially *true*.
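The interplay of competing sub-models and the ‘range of acceptable configurations’ can be sketched with the simplest possible energy landscape. This is only a toy: the quadratic penalties, the stiffness values, and the energy slack below are invented for illustration and have nothing to do with iKern’s actual functional. Each sub-model contributes a penalty around its preferred offset; the minimum is their stiffness-weighted compromise, and every offset whose energy stays within a tolerance of that minimum is ‘acceptable’. Stiffer systems yield a narrower range.

```python
def total_energy(offset, targets):
    """Global energy: one quadratic penalty per sub-model.
    `targets` maps a model's preferred offset to its stiffness."""
    return sum(k * (offset - t) ** 2 for t, k in targets.items())

def minimize(targets):
    """Closed-form minimum of a sum of quadratics:
    the stiffness-weighted mean of the preferred offsets."""
    return sum(k * t for t, k in targets.items()) / sum(targets.values())

def acceptable_range(targets, slack=1.0):
    """All offsets whose energy exceeds the minimum by at most `slack`:
    the 'range of acceptable configurations'."""
    best = minimize(targets)
    half_width = (slack / sum(targets.values())) ** 0.5
    return best - half_width, best + half_width

# Proximity prefers -8 units (tighter); rhythm prefers +2 (more air),
# and is three times stiffer:
targets = {-8.0: 1.0, 2.0: 3.0}
print(minimize(targets))                               # → -0.5
print(tuple(round(x, 2) for x in acceptable_range(targets)))  # → (-1.0, 0.0)
```

Even this caricature shows the essential point: because the minimum sits inside a flat-bottomed valley rather than at a sharp spike, many nearby configurations are energetically almost equivalent, and that slack is what makes the system *dynamic*.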

From theory to practice: the iKern algorithms, based upon this theory, perform the analytical tasks underlying the synthetic work and translate what is a general method relative to general shapes into something more specifically typographic. The dual nature of the underlying model provides a sufficient level of depth and fidelity in reproducing the wide range of behaviours that distinguish each type of shape. Depending on the design and on how tight the letterfitting is, all this may not be enough: here comes the need to strike a compromise between *proximity*, *rhythm* and *pattern*. Programming the iKern engine means looking at combinations of letters and finding a way to get the best visual result with minimal visual sacrifice. One of the standard combinations I always use is ALVAR AALTO, the name of the famous Finnish architect, because it contains a lot of triplets based on glyphs showing strongly opposite behaviours. It’s good to understand that not all combinations of letters can always be solved in a perfect way; it’s useful to see how that little bit of legibility that is lost when two consecutive ‘A’ overlap, for example, is often nothing compared to the benefits of a wider balancing; and it is useful to understand that, ultimately, typography is *architecture*.
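The triplets a test string like ALVAR AALTO yields can be enumerated mechanically; a minimal sketch, assuming only that spacing quality is judged on each glyph relative to its two neighbours (the helper name and the word-boundary rule are my own, not part of iKern):

```python
def triplets(text):
    """Sliding window of three glyphs, skipping windows that
    cross a word boundary: each triplet frames one middle glyph
    between its two neighbours."""
    return [text[i:i + 3] for i in range(len(text) - 2)
            if " " not in text[i:i + 3]]

print(triplets("ALVAR AALTO"))
# → ['ALV', 'LVA', 'VAR', 'AAL', 'ALT', 'LTO']
```

Even this short string pits diagonals against diagonals (‘AAL’), diagonals against stems (‘ALT’), and stems against rounds (‘LTO’), which is what makes it such a demanding bench test.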