# Symbolic Reasoning on Web Pages and Refactoring

How do we appropriately represent **Web pages** for **semantic analysis** and, in particular, for **refactoring** purposes? In a previous post in this same place, we introduced the main principles of what we have called semantically driven rational refactoring of Web pages, as a general framework of **symbolic reasoning**. In this post, we want to develop a preliminary but formally appealing refinement concerning page representation; we assume our previous post has been read, and we place ourselves in its context: the development of **automated tools** aimed at refactoring as a goal.

We claim that many valuable derivations, of both conceptual and implementation nature and interest, can be obtained by following a semantic approach to the handling of Web pages; we believe that looking at such derivations in more detail may provide further evidence for our claim about semantics in the context of refactoring. In this post, we want to consider one very specific case of conceptual derivation with respect to the representation model. Specifically, we want to present some intuitive criteria for theoretically analyzing a symbolic interpretation of the popular **k-means** **clustering** scheme, which is considered useful for the domain of Web pages in more general contexts.

Let us first briefly recall why this is relevant to refactoring, to set up a general context. Refactoring is often related to code understanding; hence building a useful **semantic classification** of data is, in general, a first step in the right direction. In our case, Web pages are the data sources; we want to be able to induce semantic patterns inside this kind of software element, in a similar way as search engines do to classify pages (although we should expect the specific semantic criteria to be more involved in our refactoring case, where, for instance, procedural and implementation features are especially relevant; we will present such criteria in future posts).

In any case, it is natural to consider clustering of pages according to some given **semantic criteria** as an important piece in our refactoring context: clustering is in essence about (dis)**similarity** between data, and is consequently very important in relation to common refactoring operations, especially those aimed at **code** **redundancy**. For instance, two similar code **snippets** in some set of Web pages may become **clustered** together, thereby allowing the abstraction and refactoring of a "**web control**" or similar reusable element.

As several authors have done in other contexts, we may use a **symbolic representation** model of a page; in general, pages can naturally be represented by means of so-called **symbolic objects**. For instance, an initial approximation of a page (in an **XML** notation) can be something of the form given below (where relevant page attributes (or **features**) like "title" and "content" can be directly extracted from the **HTML**):

```xml
<rawPage
  title="Artinsoft home page"
  content="ArtinSoft, the industry leader in legacy application migration and software upgrade technologies, provides the simple path to .NET and Java from a broad range of languages and platforms, such as Informix 4GL and Visual Basic."
  ... />
```

Because this representation is not, as such, effortless for calculation purposes, attribute values need to be **scaled** from plain text into a more structured basis, for clustering and other similar purposes. One way is to translate each attribute value into numerical frequencies (e.g. a **histogram**) according to the relative uses of normalized keywords taken from a given vocabulary (like "software", "legacy", "java", etc.) by using standard **NLP** techniques. After scaling, our page could look something like the following:

```xml
<scaledPage
  title="artinsoft:0.95; software:0.00; java:0.00; "
  content="artinsoft:0.93; software:0.90; java:0.75; "
  ... />
```
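As an aside, such a scaling step can be sketched roughly as follows; the vocabulary and the naive tokenization are both hypothetical, and a real system would add stemming, stop-word removal, and a weighting such as tf-idf:

```python
import re
from collections import Counter

VOCAB = ["artinsoft", "software", "legacy", "java"]  # hypothetical vocabulary

def scale(text, vocab=VOCAB):
    """Turn a raw attribute value into relative keyword frequencies."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    counts = Counter(t for t in tokens if t in vocab)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {w: counts[w] / total for w in vocab}
```

The point here is only the shape of the transformation: plain text in, a small numerical symbolic object out.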

The attribute group values denote keywords and their relative frequency in the page (the data is hypothetical in this simple example), as usual. Notice that the attribute group is another form of symbolic object whose values are already numbers. In fact, we may build another, completely flat symbolic object where attribute values are all scalar, in an obvious way:

```xml
<flatPage
  title.artinsoft="0.95" title.software="0.00" title.java="0.00"
  content.artinsoft="0.93" content.software="0.90" content.java="0.75"
  ... />
```

Now pages that are related in this way to some common keyword set can easily be compared using an appropriate numerical distance. Of course, more complex and sophisticated representations are feasible, but numerically scaling the symbolic objects remains an important need at some final stage of the calculation. However, and this is a key point, many important aspects of the computational process can be modeled independently of such a final numerical representation (leaving aside the eventual explosion of low-level attributes, or the long and sparse vectors of flat representations, which are not the point here).
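As an illustration of such a comparison, a minimal sketch over flat symbolic objects (the frequency data is hypothetical) could be:

```python
import math

# Flat symbolic objects: attribute.keyword -> relative frequency (hypothetical data)
page_a = {"title.artinsoft": 0.95, "content.artinsoft": 0.93,
          "content.software": 0.90, "content.java": 0.75}
page_b = {"title.artinsoft": 0.10, "content.artinsoft": 0.05,
          "content.software": 0.80, "content.java": 0.70}

def euclidean(p, q):
    """Euclidean distance over the union of the flat attribute keys;
    a missing key is treated as frequency 0.0."""
    keys = set(p) | set(q)
    return math.sqrt(sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 for k in keys))
```

Any other metric over the shared key space would serve equally well; Euclidean distance is used only because it is the most familiar choice.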

Let us develop this further by considering the **k-means** clustering scheme, but without committing it to any specific numerically valued scale, at least not too early. That is, we want to keep the essence of the algorithm so that it works for "pure" symbolic objects. In a simplified way, a (hard) k-means procedure works as follows:

1. Given are a finite set of objects D and an integer parameter k.

2. Choose k objects from D as the initial group of cluster **leaders** or **centroids**.

3. Assign each object in D to the group that has the closest centroid.

4. When all objects in D have been assigned, recalculate the k centroids.

5. Repeat steps 3 and 4 until the centroids no longer change (move).

Essential parameters to the algorithm are 1) the **distance** function (of step 3) used to compute the closest centroid, and 2) some form of "**object combination**" operation used to produce the centroids (step 4); in the numerical case, "combination" computes the mean vector of the vectors in the cluster.
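The procedure, with both parameters kept abstract, can be sketched as follows; the scalar instance at the bottom (absolute difference as distance, average as combine) is merely illustrative:

```python
import random

def k_means(objects, k, distance, combine, max_iter=100, seed=0):
    """Hard k-means over abstract objects: `distance` and `combine`
    are the two pluggable parameters discussed in the text."""
    rng = random.Random(seed)
    centroids = rng.sample(objects, k)            # step 2: initial leaders
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for obj in objects:                       # step 3: closest centroid
            j = min(range(k), key=lambda j: distance(obj, centroids[j]))
            clusters[j].append(obj)
        new = [combine(c) if c else centroids[j]  # step 4: recombine
               for j, c in enumerate(clusters)]
        if new == centroids:                      # step 5: fixpoint, stop
            break
        centroids = new
    return clusters, centroids

# Scalar instance: absolute difference as distance, average as combine
clusters, cents = k_means([0.1, 0.2, 0.9, 1.0], k=2,
                          distance=lambda a, b: abs(a - b),
                          combine=lambda c: sum(c) / len(c))
```

Nothing in `k_means` itself inspects numbers; supplying a symbolic distance and a symbolic combine operation would run the same loop over symbolic objects.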

Such a combination needs to be "consistent" with the distance in order to assure eventual termination (not necessarily global optimality). We want to state a formal criterion of consistency in an abstract way. Fundamental for assuring **termination** is that every assignment of an object X to a new cluster occurs at a lower distance to the new cluster leader than the distances to the leaders of X's previous clusters; in other words, the assignment of X must be a genuinely new one. The combine operation after assignment (step 4) must guarantee that the leaders of every old cluster where X was remain more distant after future recalculations. So we need abstract definitions of distance and object combination that satisfy this invariant, which we call **consistency**.

Proposing a distance is usually the easier part (in analogy with the **vector model**): objects are supposed to belong to some class, and the feature values belong to some space or domain provided with a metric. For instance, an attribute like "title" takes values over a "String" domain, where several more or less useful numerical metrics can be defined. For more structured domains built up from metric spaces, we may homomorphically compose the individual metrics of the parts to build the metric of the composed domain. For a very simplified example, suppose we have two "page" objects X and Y of the following form:

```xml
X = <page>
      <title>DXT</title>
      <content>DXC</content>
    </page>

Y = <page>
      <title>DYT</title>
      <content>DYC</content>
    </page>
```

where DXT and DYT are possibly complex elements representing symbolic objects over a metric domain DT provided with distance dist_DT, and similarly DXC and DYC come from a metric space DC with metric dist_DC. We may induce a metric dist_page for the domain "page" as a linear combination (where weights for the dimensions could also be considered):

dist_page(X, Y) = dist_DT(DXT, DYT) + dist_DC(DXC, DYC)

Continuing the simple illustration, we may proceed in a similar fashion for the combine_page operation, assuming operations combine_DT and combine_DC. In such a case, we have:

```xml
combine_page(X, Y) = <page>
                       <title>combine_DT(DXT, DYT)</title>
                       <content>combine_DC(DXC, DYC)</content>
                     </page>
```
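The homomorphic composition can be made concrete with a small sketch, representing pages as dictionaries of keyword-frequency sub-objects (the component metric and combine operation chosen here are illustrative assumptions, not the only possibilities):

```python
def dist_freq(p, q):
    """Distance on the keyword-frequency domain: sum of absolute differences."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def combine_freq(p, q):
    """Pointwise average: the 'combine' operation of the frequency domain."""
    keys = set(p) | set(q)
    return {k: (p.get(k, 0.0) + q.get(k, 0.0)) / 2 for k in keys}

def dist_page(x, y):
    """Homomorphic composition: sum the metrics of the components."""
    return (dist_freq(x["title"], y["title"])
            + dist_freq(x["content"], y["content"]))

def combine_page(x, y):
    """Combine each component with its own domain's combine operation."""
    return {"title": combine_freq(x["title"], y["title"]),
            "content": combine_freq(x["content"], y["content"])}
```

Adding per-dimension weights, as the text suggests, would only mean multiplying each component term in `dist_page` by a constant.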

It is straightforward to formulate the general case; the principle is simple enough. It is worth mentioning that "combine" is clearly a kind of abstract **unification** operation; in general, we claim it has to be consistent with the "distance" function (at every domain). Further requirements rule this operation; in particular, the domain needs to be **combination-complete** or **combination-dense**.

Proving consistency strongly relies on abstract properties like the following:

dist_D(X, Y) >= dist_D(combine_D(X,Y), Y)

In other words, "combine" actually brings objects closer together according to the distance. Fortunately, by using homomorphic compositions of consistent domains, we recover consistency at the composed domain. At the scalar level, for instance with numbers, using the average as the "combination" operation easily assures consistency.
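The scalar case is easy to spot-check: with the average as "combine", the combined point satisfies |(x + y)/2 - y| = |x - y| / 2, which is never greater than |x - y|. A quick randomized check of the abstract property:

```python
import random

def dist(x, y):
    return abs(x - y)

def combine(x, y):
    return (x + y) / 2  # average as the scalar 'combine'

# Spot-check the consistency property: dist(X, Y) >= dist(combine(X, Y), Y)
rng = random.Random(42)
for _ in range(1000):
    x, y = rng.random(), rng.random()
    assert dist(x, y) >= dist(combine(x, y), y)
```

This is of course no proof, only evidence; the algebraic argument above is what actually establishes consistency at the scalar level.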