HBIM data

Process, difficulties, chances and changes

Ilka Viehmann: 2011-2015 Architecture studies, 2016-2018 Heritage Management studies, 2016-2019 work in various architecture and engineering offices, since 2019 Board Member of the association “Verein für Denkmalpflege Mönchshaus e.V.”, since 2020 Research Associate in the Sette Bassi project at the Leipzig University of Applied Sciences

Keywords: HBIM – historical building information modelling – data management – building research – built heritage

Villa of Sette Bassi as drone photo, point cloud, and information model (work in progress). Photo: I. Viehmann

Introduction

Research projects on built heritage accumulate large amounts of data that must be handled. The data must first be acquired reliably; then the process of transforming, reducing, sorting, analyzing, storing, and sharing it begins. As with any research, the earlier the process and the desired outcome are defined, the more substantial the work can be. In the field of historical building data, the processing of this data still requires further research.

Since 2020, a team of architects, building researchers, and surveyors from the Technische Universität Berlin and the Leipzig University of Applied Sciences has been working on a research project on the Villa of Sette Bassi in Rome. The project is funded by the Deutsche Forschungsgemeinschaft (DFG) and supported by the German Archaeological Institute in Rome and the Appia Antica Archaeological Park. The main buildings and the architectural garden of the ‘Villa dei Sette Bassi’ were built between 130 and 160 AD and cover an area of around 5 hectares. The dating of the main complexes to the middle of the 2nd century is based on numerous brick stamps.1 The whole area, with several building complexes including thermal baths, an aqueduct line, at least one cistern, a temple structure, and various other building structures, covers up to 40 hectares.

Besides the survey and the process of understanding the villa as a complex, another goal of the research is to store and manage data on the buildings in a three-dimensional digital model. The methodology of heritage building information modeling (HBIM) will be adapted to this task. While modeling the building data and testing different software and approaches, it quickly became clear that a methodology like HBIM is better tested on more than one use case. This resulted in a dissertation on HBIM as a tool for building research and preservation, based on three heritage buildings. Besides the ancient villa complex, a late 17th-century timber-frame hospital and a reinforced concrete carpet factory building from the early 20th century were chosen. This combination covers different building categories and research questions, states of preservation, construction types and periods, and forms of use.

Methods

The idea of a three-dimensional digital model to store, structure, and analyze the data is intended as a process that represents the data of the building complexes more efficiently. The buildings are three-dimensional structures and the most important source on their own history; it is therefore logical and consistent to also store the data three-dimensionally. HBIM recreates the buildings from their components, such as foundations, walls, ceilings, floors, windows, doors, and roofs. This way, data on a certain building component is linked to the component itself. A major challenge is the uncertainty about building parts, materials, damage, etc., and therefore the need to define the degree of uncertainty or lack of knowledge. Classifications like the level of development (LOD), the level of information (LOI), or a more demanding classification like the level of information need (LOIN) can help to assess the data.2,3 Those classifications, though defined in standards, allow a range of interpretation that makes them adjustable to any project but, on the other hand, decreases comparability.
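As an illustration of this principle, the following minimal sketch (in Python, with hypothetical field names that do not correspond to the project's actual data schema) shows how component-linked information and an explicit uncertainty label could be structured:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical component record; the field names and the simple uncertainty
# scale are illustrative assumptions, not the project's actual data schema.
@dataclass
class BuildingComponent:
    component_id: str                 # e.g. a GUID from the authoring software
    category: str                     # wall, floor, vault, roof, ...
    material: Optional[str]           # None marks a documented lack of knowledge
    building_phase: Optional[str]     # e.g. "c. 130-160 AD" or "modern repair"
    loi_level: int                    # level of information on a project-defined scale
    uncertainty: str                  # e.g. "surveyed", "inferred", "hypothetical"
    sources: List[str] = field(default_factory=list)

# A surveyed wall next to a purely hypothetical reconstruction element:
wall = BuildingComponent("W-017", "wall", "brick-faced concrete", "c. 130-160 AD",
                         loi_level=3, uncertainty="surveyed",
                         sources=["point cloud", "brick stamps"])
vault = BuildingComponent("V-003", "vault", None, None,
                          loi_level=1, uncertainty="hypothetical")
```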

A big part of data processing is data cleaning, which is said to consume up to 80 % of the time.4 In the case of the Villa of Sette Bassi, a lot of time is needed to prepare the point clouds. Due to the ruined state of the buildings and the dense vegetation in and around the complexes, all points that do not represent building parts have to be filtered out; a minimal sketch of such a cleaning step is shown below. This procedure can already be used to identify the building parts for the later modeling process. The model is created in Autodesk Revit, a proprietary software that focuses on building information modeling (BIM) and handles point cloud data reliably. The information model is based on the building parts, while additional findings are represented by mass objects. The geometric accuracy of the model does not have to represent the as-is state, because the point cloud already does.5 All modeled parts contain information on their properties and together represent the building. In addition, information such as the volume or surface area of an object is generated automatically. Other information, such as the material, is stored in an adjustable material library, which allows a material to be linked to any object. So-called schedules allow the information to be sorted and exported as simple spreadsheets. Research results such as the elaboration of building phases and reconstruction proposals will be represented in the model and can be compared with one another. The three-dimensional representation in the model allows a better understanding of the building complexes.
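The sketch below illustrates one possible automated cleaning step; it assumes the open-source library Open3D and a deliberately crude colour criterion for vegetation, whereas the actual preparation of the Sette Bassi point clouds is largely done interactively in the processing software. File names and thresholds are placeholders.

```python
import numpy as np
import open3d as o3d  # assumed tooling; not necessarily part of the project pipeline

# Load a photogrammetric point cloud and drop points whose colour suggests
# vegetation rather than masonry (a simple "greenness" test).
pcd = o3d.io.read_point_cloud("point_cloud_block.ply")   # placeholder file name
colors = np.asarray(pcd.colors)                          # RGB values in [0, 1]

green_dominant = (colors[:, 1] > colors[:, 0] + 0.05) & (colors[:, 1] > colors[:, 2] + 0.05)
keep = np.where(~green_dominant)[0]

cleaned = pcd.select_by_index(keep.tolist())
o3d.io.write_point_cloud("point_cloud_block_cleaned.ply", cleaned)
print(f"kept {len(keep)} of {len(colors)} points")
```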

Results

In the end, all data must be shared. Here, two-dimensional drawings have major advantages because they can easily be shared, printed, and stored in physical archives. Digital three-dimensional data can be shared through standardized formats and web archives, but this is not as reliable as desired. Standards like the Industry Foundation Classes (IFC) can be used to share BIM data. But using this standard does not guarantee the desired output, because heritage building data is diverse and can be interpreted very differently. Adjustments to the data might be necessary to ensure interoperability. On the other hand, adjusting the data structure to standards like IFC can limit the use of a methodology like HBIM for the data: it may lead to a rigidly defined way of data organization that is not necessarily the best structure for the research data. In addition, the chain of input and output is long, and IFC is not as reliable as desired, because the quality of the IFC model depends heavily on the software it was developed in, the IFC version it is converted into, and the software or web viewer the model will be presented with. So rather than limiting the process of testing information modeling for heritage building structures at the very beginning, the focus lies on the usability of the methodology itself. Therefore, the initial question should rather be: What kind of data do I need to answer the questions about the built heritage, and in which way is it represented most understandably?
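One practical way to see what actually survives such an exchange is to inspect the exported IFC file itself, for example with the open-source library ifcopenshell. The sketch below, with a placeholder file name, simply lists the wall elements and the property sets that arrived with them after export:

```python
import ifcopenshell
import ifcopenshell.util.element

# Round-trip check on an exported IFC file: which property sets actually
# arrived with the wall elements? The file name is a placeholder.
model = ifcopenshell.open("project_export.ifc")

for wall in model.by_type("IfcWall"):
    psets = ifcopenshell.util.element.get_psets(wall)   # {pset name: {property: value}}
    print(wall.GlobalId, wall.Name, sorted(psets.keys()))
```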

Data on the built environment can be as diverse as the built environment itself, or even more so. The initial questions regarding the object, the hardware used for the survey, the software used to process the data, and all the individuals working with the data have a major influence on the outcome. For any other potential user of the data, it is therefore very important to be able to understand the process of data acquisition. It is equally important to mark data gaps. Data might be missing because the building structure is in a ruined state, because parts of the building are not accessible, because vegetation covers parts of the structure, because time and resources are lacking, or simply because it is not important to the research topic.
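One way to record this is a small, machine-readable provenance note kept next to each data set. The following sketch uses a hypothetical schema with placeholder values; it is meant only to illustrate the kind of information worth documenting:

```python
import json

# Hypothetical provenance record for one survey data set; every value here is
# a placeholder and the schema itself is an illustrative assumption.
provenance = {
    "dataset": "point cloud of one building complex",
    "acquired": "YYYY-MM",
    "method": "terrestrial laser scanning / drone photogrammetry",
    "processing_software": ["registration software", "photogrammetry software"],
    "coordinate_system": "common project coordinate system",
    "known_gaps": [
        "upper wall faces not reachable from the ground",
        "areas covered by dense vegetation",
    ],
}

with open("provenance_example.json", "w", encoding="utf-8") as f:
    json.dump(provenance, f, indent=2, ensure_ascii=False)
```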
The reconstruction proposal for the Villa of Sette Bassi will be modeled in the same BIM environment as the other models. This allows a comparison between the reconstruction and the point cloud and makes it easy to understand which parts of the reconstruction are based on real building parts and which are supplements. Using a common coordinate system on which all data is based makes the data comparable. Even if the data is not stored in one single format, it can then be set in relation to other data sets and to the building from which it originates.
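Computationally, this amounts to a rigid transformation (rotation plus translation) that moves locally registered points into the common project coordinate system; in the minimal sketch below the numbers are placeholders and would in practice come from the survey network.

```python
import numpy as np

# Rigid transformation of locally registered points into the common project
# coordinate system; rotation angle and offset are placeholder values.
theta = np.deg2rad(12.5)                              # rotation about the vertical axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([2310.45, 878.12, 96.30])                # offset to the project origin (m)

local_points = np.array([[1.0, 2.0, 0.5],             # e.g. points from one scan
                         [4.2, 0.3, 1.1]])
project_points = local_points @ R.T + t               # now comparable with all other data
print(project_points)
```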

Discussion

The information model and the point cloud are the main media for transmitting the data of this research. The point cloud is much more objective and stands as a representative of the building. The model is influenced by its purpose, the software, and the people creating it. Therefore the possibility of comparing the model and the point cloud plays a key role. It is possible to store both the information model and the point cloud separately in formats that can be used by various users: for the model there is IFC, and for the point cloud there are common formats like E57 or PTS. But making sure that the published data is a usable combination of the model (with building data, findings, conclusions, construction phases, and reconstruction proposals) and the point cloud as a representative of the “as-found” state of the building is a challenge that is yet to be overcome. Only a shared coordinate system can ensure that both data sets can be reliably merged again.
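Assuming both data sets already share the project coordinate system, a simple deviation check between the exported model and the “as-found” point cloud can be sketched as follows, again with Open3D as assumed tooling and placeholder file names:

```python
import numpy as np
import open3d as o3d  # assumed tooling; file names below are placeholders

# Compare the "as-found" point cloud with the information model exported as a
# mesh: sample the mesh and measure the distance from every scan point to it.
scan = o3d.io.read_point_cloud("as_found_scan.pts")        # PTS as one shareable format
model_mesh = o3d.io.read_triangle_mesh("hbim_model_export.obj")
model_pts = model_mesh.sample_points_uniformly(number_of_points=200_000)

distances = np.asarray(scan.compute_point_cloud_distance(model_pts))
print(f"median deviation: {np.median(distances):.3f} m, "
      f"95th percentile: {np.percentile(distances, 95):.3f} m")
```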

A common coordinate system and data transparency are essential for the accessibility of research data. It is necessary to make the whole work process from survey to model transparent and to document it in order to create data that is beneficial to others. This should always be considered, because creating data that cannot be understood is a waste of valuable research resources. The chosen software and formats are subjective interventions in the data; only the built heritage itself represents the objective source. Even the building is subject to a process and does not remain in the state of the survey, because it is exposed to destruction, restoration, rust, etc. In case of destruction or severe deformation of the buildings, the point cloud and the model remain available and, through standardized formats, can ideally be used even beyond the existence of the building itself. Standards are very important for describing and sharing data and should be more adaptable to research purposes. On the other hand, research should not be limited by standardization, because the data collected should primarily address the research topic. The publishing process must always be considered, and different platforms for sharing digital built heritage data are being developed to close this gap. Accessibility should be provided, and questions about the ownership of data in publicly funded research should take a secondary role. Another aspect is that software developers should be held more accountable in terms of data transparency and accessibility, if necessary through public regulation.


Footnotes
  1. Ina Seiler and Ulrich Weferling, “Rom, Italien. Die Villa von Sette Bassi. Die Arbeiten der Jahre 2017 und 2018,” eDAI-F 2018/2, 86–92.
  2. BIMForum, ed., Level of Development (LOD) Specification for Building Information Models, Part I (2021), 16–17.
  3. Building Information Modelling – Level of Information Need – Part 1: Concepts and Principles, EN 17412-1:2020.
  4. Gil Press, “Cleaning Big Data: Most Time-Consuming, Least Enjoyable Data Science Task, Survey Says,” last modified March 23, 2016, https://www.forbes.com/sites/gilpress/2016/03/23/data-preparation-most-time-consuming-least-enjoyable-data-science-task-survey-says.
  5. Leonardo Paris, Maria Laura Rossi, and Giorgia Cipriani, “Modeling as a Critical Process of Knowledge: Survey of Buildings in a State of Ruin,” IJGI 11, no. 3 (2022), 172.