
Caries detection enhancement using texture feature maps of intraoral radiographs


Data collection

This retrospective work involved 10 anonymized periapical radiographs of different patients admitted to our dental clinic in 2017. The areas containing suspected lesions were analyzed. All radiographs were obtained under identical conditions.

The patients’ charts, retrieved from a local picture archiving and communication system, included retrospectively studied cases of clinically (intraoperatively) confirmed superficial and medium caries as well as one case of deep caries. The inclusion criteria were limited lesion visibility in the entry digital intraoral radiography (DIR) images and the availability of photographic intraoral documentation (Fig. 1).

Fig. 1

Photographs captured with intraoral digital camera before and after cavity preparation

The texture feature maps were analyzed independently by two radiologists with experience in reading dental radiographs. The interobserver correlation between the readers was determined. The following features of changes in the radiographic images were examined: the tissue contrast with a special focus on the caries-affected area, the content of the evaluated shape, the caries outlines, and the sharpness of the margins. The obtained radiological results were verified by a dental specialist involved in the treatment of the evaluated cases.

The DIR images were obtained using a dental X-ray system (Kodak CS 6500; Carestream Dental, Atlanta, GA, USA) at a tube voltage of 70 kVp, a tube current of 7 mA, a mean exposure time of 0.05 s, a pixel spacing of 0.018 mm, and a 12-bit depth corresponding to the radiodensity. Files were stored losslessly in the Digital Imaging and Communications in Medicine (DICOM) format.


The computer vision domain provides many powerful tools for the recognition of image content, including an approach based on image texture analysis, which allows segmentation of regions of interest through statistical analysis of image pixel neighborhoods. Additionally, clustering or quantization methods can be utilized to automatically merge regions of a given signal range while taking their spatial positions into account.

In the present study, texture feature maps were generated by several methods. This approach differs significantly from the one-feature-per-image calculations widely used in the literature [38, 39]. Instead of computing a single feature describing the entire image, feature values were calculated for each pixel position. For this purpose, a moving window considerably smaller than the image is typically selected; in our simulations, a square window with a resolution of 21 × 21 pixels was employed. The window was centered over the pixel of interest and then moved along the image coordinates to determine the feature values for all pixels in the input image. Most procedures described below were performed in this manner, producing one feature map per computed parameter. All proposed methods were implemented in the Matlab® 2016 environment (MathWorks, Natick, MA, USA).
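The moving-window procedure described above can be sketched as follows. This is a minimal Python illustration (the study itself used Matlab); the `feature_map` helper, the 3 × 3 window, and the toy image are illustrative stand-ins for the 21 × 21 window and the radiographs.

```python
def feature_map(img, win, feature):
    """Slide a win x win window over img (a list of lists) and apply
    `feature` to the pixel values inside it; borders are handled by
    clamping coordinates to the image edges."""
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            out[y][x] = feature(vals)
    return out

# Example: a local-mean map over a 3 x 3 window of a tiny test image
img = [[0, 0, 8], [0, 8, 8], [8, 8, 8]]
mean_map = feature_map(img, 3, lambda v: sum(v) / len(v))
```

Any per-window statistic (mean, variance, entropy, and so on) can be passed as `feature`, which is how one feature map per parameter is produced.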

First-order features

An image histogram describes the probability of occurrence of each intensity within the data, and its shape can be used to determine important image parameters, including sharpness, contrast, and the number of objects. First-order features (FOF) are a formalized representation of this information. For an image I with spatial resolution W × H and intensity range G, the normalized histogram is defined by the following equation:

$$H(i)=\frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\begin{cases}1,&I(x,y)=i\\0,&\text{otherwise}\end{cases}\qquad i\in[0..G-1]$$

This equation can be used to derive six different features (mean, variance, skewness, kurtosis, energy, and entropy) as follows:

$$\text{FOF}_{\text{mean}}=\frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}I(x,y)$$

$$\text{FOF}_{\text{variance}}=\frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left(I(x,y)-\text{FOF}_{\text{mean}}\right)^2$$

$$\text{FOF}_{\text{skewness}}=\frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left(I(x,y)-\text{FOF}_{\text{mean}}\right)^3\left(\sqrt{\text{FOF}_{\text{variance}}}\right)^{-3}$$

$$\text{FOF}_{\text{kurtosis}}=\frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left\{\left(I(x,y)-\text{FOF}_{\text{mean}}\right)^4\left(\sqrt{\text{FOF}_{\text{variance}}}\right)^{-4}\right\}-3$$

$$\text{FOF}_{\text{energy}}=\sum_{i=0}^{G-1}H(i)^2$$

$$\text{FOF}_{\text{entropy}}=-\sum_{i=0}^{G-1}H(i)\log\left(H(i)+\varepsilon\right)$$

Here, ε is a small constant that ensures the logarithm is never evaluated at 0. To obtain images for these features, their values were calculated by the moving-window method with a spatial dimension of 21 × 21 pixels.
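The six first-order features above can be computed per window as in the following Python sketch (the function name and the toy input are illustrative; the study's implementation was in Matlab). Note the −3 excess term in the kurtosis, matching the definition above.

```python
import math

def first_order_features(vals, G=256, eps=1e-12):
    """Six first-order features of a list of pixel values, following
    the FOF definitions: mean, variance, skewness, kurtosis (excess),
    histogram energy, and histogram entropy (with eps guarding log 0)."""
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    sd = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in vals) / n / sd ** 3 if sd else 0.0
    kurt = sum((v - mean) ** 4 for v in vals) / n / sd ** 4 - 3 if sd else -3.0
    hist = [0.0] * G                     # normalized histogram H(i)
    for v in vals:
        hist[int(v)] += 1.0 / n
    energy = sum(p * p for p in hist)
    entropy = -sum(p * math.log(p + eps) for p in hist)
    return dict(mean=mean, variance=var, skewness=skew,
                kurtosis=kurt, energy=energy, entropy=entropy)

# Example on a tiny window of values
f = first_order_features([0, 0, 8, 8], G=16)
```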


Another method that effectively highlights the image content is k-means clustering (CLU), which operates on the pixel intensities. A previous study showed that basic analysis of image intensities allows segmentation of medical objects [40]. In this technique, the number of clusters to search for must be defined, and their initial values are set randomly. Each pixel of the image is then assigned to the nearest cluster using a distance metric (in this study, the Euclidean metric was selected). The new cluster value is set to the average intensity of all its pixels, and the procedure is repeated until the cluster values become constant or the number of iterations reaches a specified threshold. Although this method demonstrates satisfactory performance, it also has several drawbacks. First, the number of clusters must be known a priori. Second, the colors used in the cluster images are assigned randomly. Finally, the random initialization produces different outcomes for the same input, and the method is computationally expensive due to its iterative nature.
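A one-dimensional intensity version of this clustering loop can be sketched as follows (a hedged Python illustration; `kmeans_intensities` and its defaults are not from the original study). For scalar intensities the Euclidean distance reduces to an absolute difference.

```python
import random

def kmeans_intensities(pixels, k, iters=20, seed=0):
    """1-D k-means over pixel intensities: centers start at randomly
    sampled distinct intensities, each pixel is assigned to the nearest
    center, and centers are updated to their cluster means."""
    rng = random.Random(seed)
    centers = rng.sample(sorted(set(pixels)), k)
    labels = [0] * len(pixels)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(p - centers[c]))
                  for p in pixels]
        for c in range(k):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:                     # keep empty clusters in place
                centers[c] = sum(members) / len(members)
    return centers, labels

# Example: two well-separated intensity groups
centers, labels = kmeans_intensities([0, 1, 2, 100, 101, 102], 2)
```

The fixed iteration cap illustrates the stopping threshold mentioned above; the random seed makes the nondeterminism of the initialization explicit.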

Therefore, a special quantization procedure that can solve several clustering problems has been proposed. It divides all intensities of the image pixels into a given number of colors, but uses only the intensity information as a guide. While the previous method searches for the cluster center in the high-probability region of the histogram, equal spacing between colors is applied during quantization. As a result, the number of colors is identical to that in the previous technique, whereas the division varies slightly. The described method solves all of the above-mentioned problems except that the number of colors still must be set a priori.
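The quantization alternative can be illustrated with a short Python sketch (the helper name and default range are assumptions, not the study's code): unlike k-means, which adapts the bin centers to high-probability regions of the histogram, the intensity range is simply split into equally spaced bins.

```python
def quantize(pixels, levels, lo=0, hi=255):
    """Uniform quantization: split [lo, hi] into `levels` equally wide
    bins and map each pixel to its bin index (0 .. levels - 1)."""
    width = (hi - lo + 1) / levels
    return [min(int((p - lo) / width), levels - 1) for p in pixels]

# Example: 4 levels over an 8-bit range give bins of width 64
q = quantize([0, 63, 64, 255], 4)
```

This is deterministic and needs no iteration, which resolves the drawbacks listed above apart from choosing the number of colors a priori.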

Gray level co-occurrence matrix

The gray level co-occurrence matrix (GLCM) [41] contains information on the spatial relations between adjacent pixels in the texture. It is calculated as a matrix whose entries represent the probabilities of two gray tones coexisting next to each other. The distance between the analyzed pixels is a parameter of the method (its value was set to 1 in this study). To eliminate the influence of texture rotation, it is reasonable to compute four different matrices for adjacency directions of 0°, 45°, 90°, and 135° and to sum them before feature calculation. Moreover, to facilitate computation, the input image is quantized to a lower number of gray levels to reduce the matrix size (in our experiments, the images were quantized down to 32 colors). It is possible to extract the following 14 features from this matrix: angular second moment, contrast, correlation, variance, homogeneity, sum average, sum variance, sum entropy, entropy, difference variance, difference entropy, two informational measures of correlation, and the maximal correlation coefficient.
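A minimal Python sketch of the GLCM construction follows, with one representative Haralick feature (contrast) out of the 14; the function names are illustrative and the input is assumed to be already quantized to `levels` gray tones.

```python
def glcm(img, levels, distance=1):
    """Symmetric GLCM summed over the 0/45/90/135-degree offsets at the
    given distance, normalized to co-occurrence probabilities."""
    h, w = len(img), len(img[0])
    d = distance
    offsets = [(0, d), (-d, d), (-d, 0), (-d, -d)]  # 0, 45, 90, 135 deg
    m = [[0.0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            for dy, dx in offsets:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    m[img[y][x]][img[ny][nx]] += 1
                    m[img[ny][nx]][img[y][x]] += 1  # symmetric counting
    total = sum(sum(row) for row in m)
    return [[v / total for v in row] for row in m]

def glcm_contrast(m):
    """Haralick contrast: sum of p(i, j) * (i - j)^2 over the matrix."""
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m)))

# Examples: a flat patch has zero contrast, a checkerboard does not
m_const = glcm([[0, 0], [0, 0]], 2)
m_chk = glcm([[0, 1], [1, 0]], 2)
```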

Gray tone difference matrix

The gray tone difference matrix (GTDM) [42] describes texture in terms of human texture perception. Its column vector contains G entries, each representing the difference between the pixel intensity level i and the average illumination \(\bar{I}\) in a small (2K + 1) × (2K + 1) neighborhood, described as follows:

$$\text{GTDM}(i)=\sum_{x=1}^{W}\sum_{y=1}^{H}\left|i-\bar{I}\right|\ \ \text{for}\ I(x,y)=i,\qquad\text{where}\ \ \bar{I}=\frac{1}{M-1}\sum_{m=-K}^{K}\sum_{n=-K}^{K}I(x+m,y+n),\ (m,n)\neq(0,0)$$

Here, M = (2K + 1)². When a given illumination does not exist or the image has only one color, the corresponding entries are equal to zero. It is possible to derive five features from this matrix: coarseness, contrast, busyness, complexity, and strength.
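The GTDM accumulation can be sketched in Python as follows (an illustrative helper, not the study's code), following the Amadasun–King definition: each entry i accumulates |i − Ī| only at pixels whose gray tone equals i; border pixels are skipped for brevity.

```python
def gtdm(img, G, K=1):
    """GTDM column vector of length G: for each interior pixel, add the
    absolute difference between its tone and the mean of its
    (2K+1) x (2K+1) neighborhood (center excluded) to entry s[tone]."""
    h, w = len(img), len(img[0])
    s = [0.0] * G
    for y in range(K, h - K):
        for x in range(K, w - K):
            nb = [img[y + m][x + n]
                  for m in range(-K, K + 1)
                  for n in range(-K, K + 1) if (m, n) != (0, 0)]
            # len(nb) == M - 1, so this is the mean from the formula
            s[img[y][x]] += abs(img[y][x] - sum(nb) / len(nb))
    return s

# Example: a bright pixel on a dark background
s = gtdm([[0, 0, 0], [0, 8, 0], [0, 0, 0]], G=9)
```

Tones absent from the image keep zero entries, matching the remark above.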

Run-length matrix

Another technique for texture characterization is the run-length matrix (RLM) method, which assumes that a fine texture is complex and can therefore be defined by rapid changes in illumination (pixel values), whereas a coarse texture is expressed by larger sections of similar color [43]. This information is encoded in a matrix whose entries store the probabilities of a selected illumination value occurring across a number of adjacent pixels. Because of the difficulty of defining the maximum length of a color run, it is acceptable to cap the maximum length at 10 and place all longer runs in the same matrix entry. Furthermore, for computational reasons (as in the case of the GLCM), the gray levels in the image are quantized to a smaller number of intensities (32 in this study). To eliminate the influence of texture rotation, the matrix was computed at 0° and 90° and summed before feature calculation. In this study, 11 features were calculated [43,44,45,46], and their values are summarized in Table 1.
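The horizontal (0°) run counting can be sketched as follows (an illustrative Python helper; the 90° pass, obtainable by transposing the image and summing the two matrices, is omitted for brevity). Runs longer than `max_run` share the last column, as described above.

```python
def run_length_matrix(img, levels, max_run=10):
    """Horizontal run-length counts: entry [g][r - 1] counts maximal
    runs of gray level g with length r; runs longer than max_run are
    accumulated in the last column."""
    m = [[0] * max_run for _ in range(levels)]
    for row in img:
        run, prev = 1, row[0]
        for v in row[1:]:
            if v == prev:
                run += 1
            else:
                m[prev][min(run, max_run) - 1] += 1
                run, prev = 1, v
        m[prev][min(run, max_run) - 1] += 1   # close the final run
    return m

# Example: one row with a run of two 1s and a run of three 2s
m = run_length_matrix([[1, 1, 2, 2, 2]], levels=3)
```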

Table 1 Summary of texture feature maps computed using various techniques

Local binary patterns

A completely different method for texture characterization is represented by local binary patterns (LBP) [47, 48]. Rather than producing a statistical matrix, this texture operator directly computes a new representation of the input image. For each input pixel I(xc,yc), a neighborhood is specified by a radius R and the number P of points evenly sampled on the circle of that radius. The LBP function is calculated as follows:

$$\text{LBP}_{P,R}(x_{\text{c}},y_{\text{c}})=\sum_{p=0}^{P-1}s\left[I(x_{\text{c}},y_{\text{c}})-I(x_p,y_p)\right]\cdot 2^p$$

Here, the operator s[.] returns 1 for positive values and 0 otherwise.
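The operator can be sketched per pixel as follows. This Python illustration keeps the sign convention of the formula above (center minus neighbor); for simplicity the sampled coordinates are rounded to the nearest pixel rather than bilinearly interpolated, which is an assumption of this sketch.

```python
import math

def lbp(img, y, x, P=8, R=1):
    """LBP code at (y, x): bit p is set when the center pixel is
    strictly brighter than the p-th neighbor sampled on the circle of
    radius R (rounded coordinates instead of interpolation)."""
    c = img[y][x]
    code = 0
    for p in range(P):
        a = 2 * math.pi * p / P
        ny = y + round(R * math.sin(a))
        nx = x + round(R * math.cos(a))
        if c - img[ny][nx] > 0:          # s[.] = 1 for positive values
            code |= 1 << p
    return code

# Examples: a bright center sets all bits, a dark center sets none
bright = lbp([[0, 0, 0], [0, 5, 0], [0, 0, 0]], 1, 1)
dark = lbp([[5, 5, 5], [5, 0, 5], [5, 5, 5]], 1, 1)
```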

Laws’ texture energy measures

Another way to extract hidden information from the image is to use Laws’ texture energy measures (LAWS) [49]. This technique utilizes a predefined set of masks to calculate the local energy for a given image. These masks are designed to enable the detection of four texture characteristics, namely level, edge, spot, and ripple, which are then combined to describe more complex findings. In the present study, nine texture feature maps were derived using this technique.
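The construction of a Laws energy map can be sketched as follows (a hedged Python illustration; which nine mask combinations the study used is not specified here, so an L5E5 level-edge mask serves as the example). Each 2-D mask is the outer product of two standard 1-D Laws vectors.

```python
def laws_energy(img, a, b):
    """Local energy under a separable Laws mask: the 5 x 5 mask is the
    outer product of two of the standard 1-D kernels (level, edge,
    spot, ripple); energy is the absolute filter response. Border
    pixels are skipped for brevity."""
    vecs = dict(
        L=[1, 4, 6, 4, 1],      # L5: level
        E=[-1, -2, 0, 2, 1],    # E5: edge
        S=[-1, 0, 2, 0, -1],    # S5: spot
        R=[1, -4, 6, -4, 1],    # R5: ripple
    )
    mask = [[va * vb for vb in vecs[b]] for va in vecs[a]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            out[y][x] = abs(sum(mask[i][j] * img[y + i - 2][x + j - 2]
                                for i in range(5) for j in range(5)))
    return out

# Examples on a flat patch: edge-containing masks respond with zero,
# the pure level mask responds with a scaled copy of the intensity
flat = [[7] * 5 for _ in range(5)]
e_le = laws_energy(flat, "L", "E")
e_ll = laws_energy(flat, "L", "L")
```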


Most of the proposed techniques are designed for 8-bit gray-scale images and are implemented at this data accuracy. The processed data, however, have 12-bit precision; therefore, more information can be obtained by using the entire scale. In previous studies, various methods were proposed to reduce the bit resolution and the related noise level [50, 51]; however, the authors also considered the information loss occurring during image reduction from 12 to 8 bits and decided to examine whether appropriate preprocessing might improve the applicability of the proposed texture operators. Therefore, several transformations that highlight certain intensity ranges were considered: histogram equalization (HEQ) and the statistical dominance algorithm (SDA) [52].

HEQ highlights larger areas with small illumination variations by increasing their contrast. Consequently, the tissue pattern becomes more noticeable, although some regions may become less distinguishable in the presence of large objects. In this study, HEQ was applied both to the original 12-bit images (whose resolution was subsequently reduced to 8 bits) and to the images after bit reduction. Although this transformation is nonlinear, its combination with other methods creates additional possibilities for data analysis. The order of performing the bit-cut and HEQ procedures does not influence the generated texture feature maps.
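A standard CDF-based HEQ can be sketched in Python as follows (an illustrative helper operating on a flat pixel list; the study's Matlab implementation is not reproduced here):

```python
def equalize(pixels, G=256):
    """Histogram equalization via the cumulative distribution function:
    remap levels so that output intensities spread over [0, G - 1]."""
    n = len(pixels)
    hist = [0] * G
    for p in pixels:
        hist[p] += 1
    cdf, total = [0] * G, 0
    for i in range(G):
        total += hist[i]
        cdf[i] = total
    cdf_min = next(c for c in cdf if c > 0)   # first nonzero CDF value
    lut = [max(0, round((cdf[i] - cdf_min) / max(n - cdf_min, 1) * (G - 1)))
           for i in range(G)]
    return [lut[p] for p in pixels]

# Example: two mid-range levels get stretched to the full 2-bit range
eq = equalize([1, 1, 1, 1, 2, 2, 2, 2], G=4)
```

The example shows the contrast-boosting effect described above: two adjacent levels are pushed to the extremes of the output range.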

The SDA represents another approach to the analysis of subtle changes in image illumination. This technique treats the mutual relations between pixels equally regardless of whether their luminance values are very high or very low, which yields better visualization of the local texture pattern. The resulting SDA image is scaled to 8 bits and then analyzed with the same procedure as the original image; as a result, all texture feature maps were again calculated.
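A possible reading of the SDA [52] can be sketched as follows; this is an assumption-laden Python illustration (function name, neighborhood shape, and dominance rule are this sketch's interpretation, not the reference implementation): each output pixel counts the neighbors it dominates within a given radius and threshold.

```python
def sda(img, radius=1, threshold=0):
    """Assumed statistical-dominance sketch: each output pixel counts
    the neighbors within `radius` whose value does not exceed the
    center value plus `threshold` (square neighborhood, center
    excluded, borders clipped)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            cnt = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                        if img[ny][nx] <= img[y][x] + threshold:
                            cnt += 1
            out[y][x] = cnt
    return out

# Example: the bright pixel dominates all of its neighbors
out = sda([[0, 9], [0, 0]])
```

With `threshold=0`, as in the study, the count depends only on the ordering of intensities, which matches the stated independence from absolute luminance.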


The results obtained after application of the above-described texture analysis techniques may have very different value distributions. Therefore, the resulting images in this study were post-processed with either a histogram stretching (HSTR) algorithm after 8-bit normalization or an HEQ procedure followed by scaling of the results to 8-bit resolution.
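Linear histogram stretching can be sketched as follows (an illustrative Python helper; the Matlab implementation is not reproduced): the observed minimum–maximum range is mapped onto the full output range.

```python
def stretch(pixels, G=256):
    """Linear histogram stretching: map [min, max] of the data onto the
    full [0, G - 1] output range; a constant input maps to zeros."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0] * len(pixels)
    return [round((p - lo) / (hi - lo) * (G - 1)) for p in pixels]

# Example: a narrow mid-gray band expands to the full 8-bit range
s = stretch([100, 150, 200])
```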

Summary of texture feature map analysis techniques

For the obtained DIR images, texture feature maps were constructed using the above-described preprocessing techniques, analysis methods, and post-processing approaches (their possible combinations are shown in Fig. 2).

Fig. 2

All possible entry data processing paths. SDA statistical dominance algorithm, CLU k-means clustering

As discussed in the previous sections, the utilized texture operators have various parameters. For the LBP method, it is the radius; for CLU, the number of searched clusters and the related number of colors in the resulting image are taken into account. Once the statistical features of the GLCM, FOF, GTDM, and RLM have been determined, the corresponding texture feature maps are calculated (see Table 1).

The parameters describing the analyzed neighborhoods should also be determined. For the computation of texture feature maps, the moving-window resolution was 21 × 21 pixels (as mentioned previously). For the LBP function, P = 8 was always used, while the radius varied from 5 to 20. Clustering was performed with 10 to 50 clusters. When the images were preprocessed with the SDA, the domination threshold was set to zero to obtain results independent of the image quality or object surfaces, and the radius was set to 20. Some of the discussed techniques have additional parameters; the selected values are given in the detailed descriptions of the corresponding methods.


