While visualizations of various types, such as maps, 3-D models, and animations, have become staples of digital humanities approaches to art and architectural history, integrating analog data (artworks and drawings, archival documents, and so on) into born-digital outputs remains a fundamental challenge. This article discusses processes developed through ongoing work on the art historical visualization project Florence4D. It proposes an integrated approach in which technologies for 3-D modeling, mapping, and location-aware augmented reality (AR) converge, while the underlying research data remains no more than a click away in structured ontology databases. The structure of that data is key to creating a collaborative research space in which three broadly defined spatial technologies, namely 3-D modeling and AR, GPS, and geographic information systems (GIS), interact, enabling researchers to move seamlessly between building-, local-, and urban-scale analysis in interpreting the history of art, architecture, and urban design.
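To make the idea of scale-linked structured data concrete, the following is a minimal sketch of what a single database record tying a site's 3-D model, GPS anchor, and GIS layer together might look like. The schema, field names, coordinates, and URLs here are hypothetical illustrations, not the Florence4D data model.

```python
from dataclasses import dataclass, field

@dataclass
class SiteRecord:
    """Hypothetical record linking one site across spatial scales."""
    name: str                 # site name (building scale)
    gps: tuple                # (latitude, longitude) anchor for GPS / location-aware AR
    model_url: str            # link to a 3-D model asset
    gis_layer: str            # identifier of a GIS layer for urban-scale analysis
    sources: list = field(default_factory=list)  # analog sources: drawings, archival documents

# Example record with placeholder values
record = SiteRecord(
    name="Example church (placeholder)",
    gps=(43.7731, 11.2560),
    model_url="https://example.org/models/church.glb",
    gis_layer="florence_parishes_placeholder",
    sources=["archival document (placeholder)"],
)
print(record.gis_layer)
```

In a scheme like this, the same record can be queried by an AR client (via `gps`), a 3-D viewer (via `model_url`), or a GIS platform (via `gis_layer`), which is one way the three spatial technologies could share a single structured data source.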