
Acquiring the Shape and Appearance of Physical Scenes

Author
Title
Acquiring the Shape and Appearance of Physical Scenes [electronic resource].
ISBN
9781124806211
Physical Description
1 online resource (111 p.)
Local Notes
Access is available to the Yale community.
Notes
Source: Dissertation Abstracts International, Volume: 72-10, Section: B, page: 6129.
Adviser: Holly Rushmeier.
Access and use
Access restricted by licensing agreement.
Summary
Three-dimensional digital models of physical scenes are frequently needed in computer graphics applications. Example applications include the documentation and exposition of cultural heritage, the design of proposed building renovations, and virtual environments for training. The digital models for these applications must contain both accurate shape and appearance information to be rendered photo-realistically under varying conditions. Most importantly, these models must be complete: data holes and blank surfaces are unacceptable. Various methods exist to acquire such models, but each is limited in producing complete models by the assumptions about the scene it requires in order to function. In this dissertation, we present a new set of methods for acquiring scenes that can be used when existing methods fail. These new methods require either no assumptions, or only very simple assumptions, about the scene to produce meaningful results.
In the area of appearance capture, we focus on acquiring the Lambertian reflectances in a scene. In the area of shape capture, we focus on methods for capturing individual objects that do not rely on the objects having any particular optical property, and on placing models of such objects in a complex scene that has been partially captured with a laser scanner. Our work makes three major contributions: one in appearance capture, one in object modeling, and one in scene assembly.
First, we consider the problem of capturing the appearance of large-scale objects and scenes. Models of such large structures can be generated from range scans acquired with a time-of-flight laser scanner and images from a digital camera. Existing methods for computing reflectances from scans and images assume that the illumination of the scene can be controlled or measured. For most scenes, however, the illumination cannot be controlled and dense spatial measurements of illumination are not possible; in such scenes, existing methods fail to produce an estimate of the scene reflectances. We remove the assumption of controlled or measured illumination by developing a method that relies only on data we know can be obtained from the time-of-flight scanner. We present a system for processing multiple color images into an integrated map of diffuse reflectance values that makes use of the laser scanner return intensity and the captured geometry.
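The Lambertian model underlying any diffuse-reflectance estimate like the one described above can be sketched as follows. This is not the dissertation's method (which recovers reflectance without measured illumination); it is a minimal illustration, assuming a known surface normal, light direction, and irradiance, of how albedo is read off from observed intensity under the model I = rho * E * max(0, N.L):

```python
import numpy as np

def estimate_albedo(intensity, normal, light_dir, irradiance):
    """Invert the Lambertian model I = rho * E * max(0, N.L) for albedo rho."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    shading = max(0.0, float(n @ l))
    if shading < 1e-6:
        return None  # surface faces away from the light; albedo is unrecoverable
    return intensity / (irradiance * shading)

# A point lit head-on with irradiance 2.0, observed at intensity 1.0
rho = estimate_albedo(1.0, np.array([0.0, 0.0, 1.0]),
                      np.array([0.0, 0.0, 1.0]), 2.0)
```

The difficulty the dissertation addresses is precisely that, for real scenes, the irradiance term in this inversion is unknown and cannot be densely measured.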
Second, we consider capturing the geometry of individual objects in scenes. Most capture methods assume that a scanner or camera can be positioned to get a full view of the object, that the object is opaque, and/or that it has some identifiable surface texture. If an object is not visible from a scanner or camera position, or is optically uncooperative, existing methods fail to produce a model. We present capture techniques that avoid all of these assumptions. Our new techniques for capturing the shape of physical objects use simple tools such as calipers, contour gauges, and markings on paper. These methods use multi-dimensional scaling to convert pairwise point measurements made with calipers into a network of 3D points. Profiles traced on paper from contour gauges are placed relative to the 3D network by tracing contours that include caliper-measured points. We use geodesic distances recorded on paper wrapped around the object to refine the initial polygonal shape to a smooth surface. We also demonstrate that the models we construct can be used to improve optical approaches for model capture.
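The core step of turning pairwise caliper measurements into 3D coordinates is classical multi-dimensional scaling. A minimal sketch (not the dissertation's implementation, which must also handle measurement noise and geodesic refinement) recovers point positions, up to a rigid motion, from a matrix of pairwise distances:

```python
import numpy as np

def classical_mds(dist, dim=3):
    """Recover point coordinates (up to rigid motion and reflection)
    from an n-by-n matrix of pairwise Euclidean distances."""
    n = dist.shape[0]
    d2 = dist ** 2
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ d2 @ j                 # double-centered Gram matrix
    w, v = np.linalg.eigh(b)              # eigenvalues ascending
    idx = np.argsort(w)[::-1][:dim]       # keep the dim largest
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Demo: recover coordinates from the pairwise distances of known points.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]])
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
rec = classical_mds(d)
d_rec = np.linalg.norm(rec[:, None] - rec[None, :], axis=-1)
```

Because only distances are measured, the recovered network is determined only up to rotation, translation, and reflection, which is sufficient for building a shape model.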
Third, we consider modeling and placing missing elements in architectural models when it is impossible to place the scanner in positions that view all surfaces. Existing scanner technologies fail to capture complete scenes, leaving many geometric holes. Missing elements can generally be modeled as simplified shapes derived from drawings or captured with our new simple-tools methods. We exploit (but do not rely on) the fact that architectural spaces often contain multiple instances of an object, which can be combined to improve the object model. We build on previous work to represent models abstractly as graphs of relationships between basic shapes such as planes, cylinders, and spheres. We present an automatic approach that searches for incomplete instances of objects using abstract shape representations of both simple models of individual objects and the large, detailed point cloud produced by scanning the scene. The simple models can be used to fill in the missing objects, and the partially scanned portions of multiple object instances can be combined to refine the scene model.
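The graph-of-primitives idea above can be illustrated with a hypothetical toy sketch, in which nodes carry a primitive type ("plane", "cylinder", ...) and edges record adjacency; finding candidate object instances then reduces to labeled subgraph matching. The naive enumeration below is for illustration only and stands in for the dissertation's actual search:

```python
from itertools import permutations

def find_instances(template, scene):
    """Return node mappings under which `template` appears as a
    labeled subgraph of `scene` (brute-force, for small graphs)."""
    t_nodes = list(template["nodes"])
    s_nodes = list(scene["nodes"])
    matches = []
    for perm in permutations(s_nodes, len(t_nodes)):
        mapping = dict(zip(t_nodes, perm))
        # Primitive types must agree node-for-node.
        if any(template["nodes"][t] != scene["nodes"][mapping[t]]
               for t in t_nodes):
            continue
        # Every template adjacency must exist in the scene graph.
        if all((mapping[a], mapping[b]) in scene["edges"]
               or (mapping[b], mapping[a]) in scene["edges"]
               for a, b in template["edges"]):
            matches.append(mapping)
    return matches

# A "column" template: a cylinder adjacent to a plane.
template = {"nodes": {"c": "cylinder", "p": "plane"}, "edges": {("c", "p")}}
scene = {"nodes": {"c1": "cylinder", "p1": "plane", "s1": "sphere"},
         "edges": {("c1", "p1")}}
matches = find_instances(template, scene)
```

A real scene graph would be far larger and the match partial, which is why combining several incomplete instances of the same object is useful.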
For all of our contributions, we demonstrate our results with examples from cultural heritage applications. For appearance properties, we compute the reflectances for an existing large sculpture and vaulted ceiling on the Yale campus. For individual objects, we capture models of detailed carvings situated in Yale's Sterling Library. For filling in a large architectural scene, we use a laser scan and simple models from a historic synagogue in New Haven.
Format
Books / Online / Dissertations & Theses
Language
English
Added to Catalog
October 03, 2012
Thesis note
Thesis (Ph.D.)--Yale University, 2011.
Also listed under
Yale University.