METHODS
Polysiloxane putty (Lab-Putty, Coltène/Whaledent AG, Altstätten, Switzerland) was used to make partial moulds of the lower molars (m2 and m1), after which synthetic dental stone (Fujirock, GC Europe n.v., Leuven, Belgium) was used to make replicas. Statistical calculations were carried out using the statistics package JMP 8.0.2 (SAS Institute, Inc., Cary, North Carolina, USA), except for the bar graphs and the hierarchical clustering, which were produced in MATLAB (version 6.5.0.180913a, Release 13, The MathWorks, Inc.). Euclidean distance and complete linkage were used in the hierarchical clustering.
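The clustering step can be illustrated with a short Python sketch using SciPy in place of MATLAB; the taxa and variable values below are invented for illustration and are not data from this study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative mesowear variables per taxon (hypothetical values):
# columns = relative sharpness, relative bluntness, relative height.
taxa = ["taxon_A", "taxon_B", "taxon_C", "taxon_D"]
X = np.array([
    [0.80, 0.05, 0.90],   # browser-like signal
    [0.75, 0.10, 0.85],
    [0.10, 0.60, 0.20],   # grazer-like signal
    [0.15, 0.55, 0.25],
])

# Euclidean distance and complete linkage, as used in the study.
Z = linkage(X, method="complete", metric="euclidean")

# Cut the tree into two clusters for inspection.
labels = fcluster(Z, t=2, criterion="maxclust")
for name, lab in zip(taxa, labels):
    print(name, lab)
```

With these made-up values the two browser-like taxa and the two grazer-like taxa separate into the expected two clusters.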
Mesowear Analysis of Upper Molars
The original mesowear method, in which cusp apices are visually scored as sharp, rounded, or blunt, and the valleys as high or low (Fortelius and Solounias 2000), was applied to the first and second upper molars. Only the sharper apices were scored. The index of hypsodonty (Janis 1988) was excluded from this study. A hierarchical cluster analysis of the mesowear data of the upper molars was performed. In this analysis, following
Fortelius and Solounias (2000, figure 2b), we used the relative sharpness, the relative bluntness, and the relative height of the molar cusps as variables.
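How the three cluster variables might be derived from raw per-specimen scores can be sketched as follows; the function and the sample scores are our own illustrative construction, not code or data from the study.

```python
from collections import Counter

def mesowear_variables(apex_scores, relief_scores):
    """Convert per-specimen scores into the three cluster variables.

    apex_scores:   list of 'sharp' / 'rounded' / 'blunt'
    relief_scores: list of 'high' / 'low'
    Returns (relative sharpness, relative bluntness, relative height),
    each as a fraction of the scored specimens.
    """
    apex = Counter(apex_scores)
    relief = Counter(relief_scores)
    n_apex = len(apex_scores)
    n_relief = len(relief_scores)
    return (apex["sharp"] / n_apex,
            apex["blunt"] / n_apex,
            relief["high"] / n_relief)

# Hypothetical sample of ten specimens of one species.
apices = ["sharp"] * 6 + ["rounded"] * 3 + ["blunt"]
reliefs = ["high"] * 8 + ["low"] * 2
print(mesowear_variables(apices, reliefs))  # (0.6, 0.1, 0.8)
```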
Wear Pattern Analyses of Lower Molars
For the analysis of the lower molars, we constructed digital 3D models of those parts of the molars that are relevant to the methods used. We used two different methods. In the first, the angle between the buccal side surface and the enamel edge was measured; the second was based on visual inspection of the buccal side enamel edge to determine the presence of facets.
Obtaining 3D representations. To be able to analyse the specimens off-line and by computer, we obtained digital 3D representations of the teeth. The representations were mostly made from casts, but teeth of Recent species as well as fossilized teeth were also used as starting points.
The technique we used for obtaining the 3D models is based on computationally combining multiple ordinary (2D) photographs, showing the target object from different directions, to yield a 3D representation of that object. This so-called multi-view stereo (MVS) technique is much faster and more convenient to apply in a field setting than, for example, a needle scanner or a laser scanner. It does, however, provide a somewhat lower resolution. Hence, in order to get some idea of the actual resolution, we compared the representations obtained by MVS to representations obtained by a needle scanner and a laser scanner.
For the 3D reconstruction computations we used a combination of two computer programs, Bundler (version 0.3) (Snavely et al. 2006; Bundler: Structure from Motion for Unordered Image Collections) and PMVS (Patch-based Multi-View Stereo Software, version 1) (Furukawa and Ponce 2007; Patch-Based Multi-View Stereo Software - PMVS). The former program was used to estimate the camera parameters (focal length, the location of the camera relative to the specimen, and the direction in which the camera was pointed), and the latter was used to obtain the 3D reconstruction.
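The reconstruction pipeline can be sketched as a sequence of command lines. The helper below only assembles the commands; the script and executable names (RunBundler.sh, Bundle2PMVS, pmvs2) follow common Bundler 0.3 / PMVS distributions but may differ between installations, so treat them as placeholders rather than as the exact invocation used in the study.

```python
from pathlib import Path

def build_reconstruction_commands(image_dir, work_dir):
    """Sketch of the Bundler -> PMVS pipeline as command lines.

    image_dir: directory containing the ~40 photographs of one specimen
    work_dir:  directory for intermediate and final reconstruction output
    """
    image_dir, work_dir = Path(image_dir), Path(work_dir)
    commands = [
        # 1. Bundler: estimate the camera parameters from the photo stack.
        ["RunBundler.sh", str(image_dir)],
        # 2. Convert Bundler output into PMVS input (undistorted images,
        #    camera files, option file).
        ["Bundle2PMVS", "list.txt", "bundle/bundle.out"],
        # 3. PMVS: dense patch-based multi-view stereo reconstruction.
        ["pmvs2", str(work_dir / "pmvs") + "/", "option.txt"],
    ]
    return commands

for cmd in build_reconstruction_commands("photos", "recon"):
    print(" ".join(cmd))
```

In practice each command would be run with, e.g., `subprocess.run(cmd, check=True)` once the binaries are on the path.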
Typically some 40 photographs were taken of each specimen in order to obtain an adequate 3D model. While experimenting with the process, it became clear that the Bundler-PMVS combination requires care both in photographing the specimen and in appropriate pre-processing of the images. For the programs to find the needed common features more or less uniquely in the different photographs, it was necessary to ensure that the interesting parts of each photograph were in focus and well lit. This required external lighting together with a long focal-length setting on the camera, and, furthermore, that the most out-of-focus parts of the photographs were masked. Masking was also used to single out the interesting parts of each photograph, that is, the parts showing those features of each tooth we were interested in. This was done because experience showed that, for the programs not to hang when processing high-resolution images, the total number of pixels in each image had to be limited. Finally, for the algorithms used by the programs to work, it was necessary to ensure that sufficiently many common details or features spanned a number of the photographs; that is, when going from one image to the next (in order), each image pair has to share a sufficient number of common features.
The photographs were taken with two different cameras, a Nikon D70 with an AF Micro-Nikkor 60 mm lens and a Nikon Coolpix 4500. The specimens were placed on a table and secured in place with plasticine. They were illuminated by a pair of 150 W construction-site lanterns, which were kept in fixed positions with respect to the table and the specimen between photographs. The lights were fixed in place because, as the 3D reconstruction is based on finding common features across a stack of images, the apparent surface texture, including highlights and shadows, must be stationary across all of the photographs. This
issue becomes apparent when trying to use the procedure to obtain 3D models from photographs of actual teeth, which are typically quite shiny. The specular reflection implied by the shine means that part of the surface texture consists of a mirror image of the environment, especially of the light source(s). This mirror image by its very nature "wanders" around when the point of view changes. As a consequence, and because all highlights resemble one another and form (over)prominent features in the photographs, highlights tend to throw off the 3D reconstruction procedure. To overcome these difficulties, a diffuser (a sheet of baking paper) was introduced between the lights and the specimen when photographing actual teeth.
Once a stack of photographs of a specimen had been obtained, the images were fed to the Bundler program, which returns the camera parameters associated with each image. It also undistorts each image, that is, tries to undo the geometrical distortions originating in the camera optics. The masking of the images to retain only the in-focus regions of interest was done on the undistorted images, and a set of masks corresponding to each image was thus generated.
The masking was done by hand. The undistorted images, their masks, and the associated camera parameters were then passed on to the PMVS program, which used them to build the 3D model. The output from the PMVS program is a set of patches in three-dimensional space. To get more flexibility in the choice of tools used to further process the models, we used the conversion programs "patch2pset.exe" and "PoissonRecon.32.exe", provided with the PMVS software distribution, to convert the patch-based model into a point cloud and into a mesh based on that point cloud. The point cloud essentially contains the three-dimensional coordinates of a huge number of points that PMVS has determined to lie on the surface of the specimen.
The 3D point cloud representation of the specimen was then further processed by the Meshlab software (Meshlab v1.2.1.1,
Visual Computing Lab–ISTI–CNR). First, those parts of the representation that did not correspond to regions of interest, including the artificial features introduced by PoissonRecon.32.exe when creating the mesh, were cut away from the 3D model. Because the 3D model produced by the PMVS program tends to be rather noisy (see
Figure 2), the model was then smoothed by applying the Laplacian Smooth algorithm of Meshlab
with parameter 10 seven times in a row to the model. We note that careful
smoothing of a noisy model does not really alter the available true information,
but rather changes the shape of the erroneous deviations into a visually more
pleasing form. Once the model was smoothed, its vertex (point) count was reduced
by removing those vertices that lie so close together as to provide almost no additional detail. This was done with the Merge Close Vertices algorithm of Meshlab, with parameter 10. The vertex reduction makes the model more usable in other programs (it leads to smaller memory usage and shorter processing times). The progress of the smoothing procedure and the subsequent vertex removal is shown in
Figure 2, which depicts the result after 0, 3, and 7 smoothing steps, and after the vertex removal, respectively.
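For illustration, the effect of iterated Laplacian smoothing can be sketched in Python. This minimal vertex-averaging version is our own stand-in for Meshlab's Laplacian Smooth filter, not its actual implementation, and the toy mesh below is purely hypothetical.

```python
import numpy as np

def laplacian_smooth(vertices, neighbours, iterations=7, lam=0.5):
    """Minimal Laplacian smoothing sketch (not Meshlab's implementation).

    vertices:   (N, 3) array of vertex coordinates
    neighbours: list of index lists; neighbours[i] = vertices adjacent to i
    Each iteration moves every vertex towards the average of its
    neighbours by a factor lam, damping high-frequency noise.
    """
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        avg = np.array([v[nb].mean(axis=0) if nb else v[i]
                        for i, nb in enumerate(neighbours)])
        v += lam * (avg - v)
    return v

# Toy example: three collinear points with a noisy middle vertex.
verts = [[0, 0, 0], [1, 0.4, 0], [2, 0, 0]]
nbrs = [[1], [0, 2], [1]]
smoothed = laplacian_smooth(verts, nbrs, iterations=7)
```

After smoothing, the deviation of the middle vertex is reduced, which mirrors the point made above: smoothing reshapes the erroneous deviations rather than recovering new information.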
For comparison the results obtained by a needle scanner (Roland Picza PIX-4 3D, 50
μm nominal resolution) and a laser scanner (Nextec Hawk 3D laser scanner, 30
μm nominal resolution) of the same specimen, as well as a photograph taken from roughly the same direction as the models are rendered, are also presented in
Figure 2. From the figure it is clear that the needle scanner and the laser scanner provide somewhat more detail than the photograph-based reconstruction. More important for our present purpose, however, is that the comparison shows that the detail present in the MVS reconstruction, even after the smoothing and vertex-removal procedures, faithfully represents the specimen. The reconstruction can therefore, within the accuracy it provides, be used to reliably analyse the specimen.
Analysing the angles. We measured the angle between the buccal side enamel edge and the buccal surface of the molars. In browsers, where the enamel edge tends to develop a facet, this angle is larger than in grazers, where no facets develop. We investigated the lower molars m1 and m2, and measured the angle at two locations, in the vicinity of the hypoconid and protoconid tip regions, that is, approximately at the common edge of facets one and two, and of facets six and seven (facet numbering according to
Butler 1952), respectively. This choice of measurement location was dictated by practical reasons, since it made it relatively easy to measure the angles in a consistent manner for different molars.
The measurements were done on the reconstructed 3D models. For this purpose we used the VRMesh software (VRMesh v.4.1.2 Studio, Copyright 2003-2008, Virtual Grid Company). In order to measure the angle, we first chose a suitable section of the region around the hypoconid or protoconid tip. We then selected two closely spaced points on the boundary edge of the facet or on the enamel edge, depending on whether the molar had a facet or not. The line passing through these two points (indicating the direction of the edge) was then interpreted as the normal of a plane, which bisects the model and lies (roughly) perpendicular to the edge (see
Figure 3.1). Next we deleted those parts of the model that lay on one side of this plane, thus revealing the surface profile around the edge, as shown in
Figure 3.2. The model was then rotated so that the aforementioned normal coincided with the direction of viewing in VRMesh (see
Figure 3.3-4). This rotation produced an effective 2D representation of the surface profile and especially of the angle between the buccal side enamel edge and the buccal surface. Finally, we used the tools provided in VRMesh to measure the angle (alpha) from this 2D representation (see
Figure 3.4).
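Geometrically, the measurement amounts to projecting the two surfaces onto the plane perpendicular to the enamel edge and taking the angle between the projections. The numpy sketch below is our own construction of that geometry, not the VRMesh procedure, and the input vectors are hypothetical.

```python
import numpy as np

def profile_angle(edge_dir, v_buccal, v_facet):
    """Angle (alpha) between two surfaces, measured in the plane
    perpendicular to the enamel edge (a geometric sketch, not the
    actual VRMesh measurement tool).

    edge_dir:          direction of the enamel edge (cutting-plane normal)
    v_buccal, v_facet: vectors lying along the two surfaces near the edge
    Returns the angle in degrees.
    """
    n = np.asarray(edge_dir, dtype=float)
    n = n / np.linalg.norm(n)

    def project(v):
        v = np.asarray(v, dtype=float)
        p = v - np.dot(v, n) * n      # remove the component along the edge
        return p / np.linalg.norm(p)

    u, w = project(v_buccal), project(v_facet)
    return np.degrees(np.arccos(np.clip(np.dot(u, w), -1.0, 1.0)))

# Edge along the y axis; buccal surface dropping vertically,
# occlusal facet inclined at 45 degrees: alpha = 135 degrees.
alpha = profile_angle([0, 1, 0], [0, 0, -1], [1, 0, 1])
print(round(alpha, 1))  # 135.0
```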
Visual analysis. We also analysed the lower molars visually. In this analysis the molars were divided into groups based on the number of buccal side enamel edges with facets.
The presence of facets was determined by visual inspection. We used two variations of this method.
In the first variation we determined the presence of each of the four enamel edge facets, one, two, six, and seven, separately, and computed the total number of facets present. This variation thus yields a number between 0 and 4, with 0 indicating that no facets are present, and 4 indicating that facets are present at all four locations on the enamel edge (see
Figure 4).
In the second, simpler, variation we classified the lower molars into two groups: those with at least one buccal facet, and those with no buccal facets. Thus a molar was classified as facetted if any of the facets one, two, six, and seven was present; otherwise it was classified as not facetted.
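Both groupings can be expressed as one small scoring function. The sketch below is our own, assuming boolean presence/absence calls for facets one, two, six, and seven.

```python
def facet_score(facets_present):
    """Facet scoring for one lower molar.

    facets_present: dict with boolean entries for facets 1, 2, 6 and 7
                    (True = facet present at that enamel edge location).
    Returns (count, facetted), where count is the 0-4 score of the first
    variation and facetted is the binary classification of the second.
    """
    count = sum(bool(facets_present.get(f, False)) for f in (1, 2, 6, 7))
    return count, count > 0

print(facet_score({1: True, 2: True, 6: False, 7: False}))   # (2, True)
print(facet_score({1: False, 2: False, 6: False, 7: False})) # (0, False)
```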