IES were asked through the University of Strathclyde to take part in an Empirical Whole Model Validation Twin House Experiment for the IEA-EBC Programme's Annex 71 project. This particular experiment was a validation exercise for simulation tools, helping to demonstrate that they are capable of accurate predictions that can be verified against real-world measurements.
The IEA-EBC Programme is an international energy research and innovation programme in the buildings and communities field. It enables collaborative R&D projects among its 26 member countries.
For this experiment the test buildings were twin houses situated in Holzkirchen, Germany. The main purpose of the experiment was to validate how closely a range of simulation packages could predict the performance of these buildings. There were a number of performance variables in play over the course of the experiment. The two real buildings were very closely controlled in terms of how the spaces were operated. Unlike a typical household, where operational data such as occupancy and internal heat gains can only really be estimated, the experiment was intentionally set up to be very closely controlled so that the performance inputs could be shared with the modellers, helping to ensure the best possible predictions.
This was on a larger scale than some of the other validation tests that IES take part in. Others, such as ASHRAE 140, typically involve just a single-zone model, and the tests themselves are generally much more simplified, with perhaps only one parameter changing at a time. The Annex 71 experiment, by contrast, had far more going on. Different dynamic processes, such as conduction through a range of construction materials and inter-zonal air movement from one space to another, meant that the modellers received more realistic measured data, with many opportunities for variations to creep in. In terms of the availability of operational data, this is the most detailed validation study that IES has taken part in.
The experiment had two stages: a blind test and what was referred to as an open test. During the blind test we were provided with information on the building itself, including all of the construction details, along with the user loads. From that information, we used the IES Virtual Environment (IESVE) to predict what the resulting internal temperatures and heating loads would be. Once we had submitted our findings for that stage, there was an open stage where we were told what the measured air temperatures and heating loads had been. This allowed us to see how close our simulation was, identify the differences, and consider what we might change about our modelling setup with that hindsight information.
IES's iSCAN tool was then used for the model calibration. iSCAN allowed the team to gather all the building and operational data in one central platform. The operation of the buildings changed every 5 minutes or so, with varied heat profiles and no two days being the same. All the profiles were provided to us in spreadsheet format, which we were able to import readily into iSCAN and then connect to the IESVE to generate the profiles within the VE model. iSCAN proved extremely useful for this exercise; without it, setting up such detailed profiles that changed every 5 minutes for the full duration would have been near impossible.
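The kind of spreadsheet-to-profile preparation described above can be sketched in a few lines of pandas. This is a minimal, hypothetical example: the column names, values, and timestamps are invented for illustration, and the actual iSCAN import mechanism is not shown.

```python
# Sketch: loading a 5-minute operational profile from spreadsheet-style
# data and sanity-checking it before import into a simulation tool.
# All names and values here are illustrative, not from the Annex 71 dataset.
import io

import pandas as pd

csv_data = """timestamp,heater_output_W
2019-01-01 00:00,500
2019-01-01 00:05,520
2019-01-01 00:10,480
2019-01-01 00:15,510
"""

profile = pd.read_csv(
    io.StringIO(csv_data), parse_dates=["timestamp"], index_col="timestamp"
)

# Verify the profile sits on a regular 5-minute grid before importing it
steps = profile.index.to_series().diff().dropna()
assert (steps == pd.Timedelta(minutes=5)).all()

# Aggregate to hourly means as a quick sanity check of the raw data
hourly = profile["heater_output_W"].resample("1h").mean()
print(hourly.iloc[0])  # mean of the four 5-minute values: 502.5
```

In practice the checks matter as much as the import itself: with profiles changing every 5 minutes over months, a single gap or duplicated timestamp can silently shift every subsequent value.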
Once all the key performance data was imported into the model, we were able to set up all of the fabric characteristics to reflect what had been provided in the data sheets. We went into more detail than we ordinarily would on a project, because we were provided with good information on the thermal bridging characteristics. Detailed information, including wall-to-wall junctions, showed just how much additional heat loss had been determined from a detailed two-dimensional heat loss calculation. So rather than just having a generic construction for the building, we were able to tailor it based on the different dimensions and positions of all the walls, roofs and other elements throughout the building. Once this was complete, it was helpful to export the information from our VE results back into iSCAN, allowing us to draw a direct comparison between the two.
The model calibration exercise helped us to validate how close the real-world metered data was to our simulated performance data. Using iSCAN, we linked the relevant data channels from the metered information to VISTA variables; we could then run different simulations, bring the VISTA results into iSCAN, and start drawing comparisons. This allowed us to draw direct comparisons between what our simulation projected would happen and the metered data, to see how closely the two aligned. In reviewing the results, it became apparent that they were extremely closely aligned, particularly for the heating energy results (see figure 20 below). Even at the blind phase, where we didn't have the detailed data to look back at, we still had extremely close agreement. Throughout the whole experiment there were only three instances where the simulation didn't match the actual performance, and it was discovered that this was because the building had not been operated as scheduled (see figure 22 below).
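A simulated-vs-metered comparison of the kind described here can be illustrated with a short sketch. The data below is synthetic (the real series came from iSCAN/VISTA exports), and the weekly aggregation simply mirrors the idea behind the weekly energy demand comparison.

```python
# Sketch: comparing weekly energy demand from simulated and metered
# hourly series. Data is synthetic and for illustration only.
import numpy as np
import pandas as pd

idx = pd.date_range("2019-01-01", periods=24 * 14, freq="1h")  # two weeks, hourly
rng = np.random.default_rng(0)

# Synthetic metered load (kWh per hour) and a near-identical simulation,
# standing in for the close agreement observed in the experiment
metered = pd.Series(1.0 + 0.1 * rng.standard_normal(len(idx)), index=idx)
simulated = metered + 0.02

weekly = pd.DataFrame({
    "metered_kWh": metered.resample("7D").sum(),
    "simulated_kWh": simulated.resample("7D").sum(),
})
weekly["diff_pct"] = (
    100 * (weekly["simulated_kWh"] - weekly["metered_kWh"]) / weekly["metered_kWh"]
)
print(weekly.round(1))
```

Aggregating to weekly totals before comparing is a deliberate choice: it smooths out timing offsets of an hour or two that are irrelevant to energy use, while still exposing any systematic over- or under-prediction.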
All through the simulation period, our analysis of what we expected to happen closely aligned with what actually happened, which really helped to make the case that simulation modelling is extremely valuable as a prediction tool for understanding how a building is going to perform (see figure 21 below). With detailed operational data we were able to build a very accurate model using IESVE, whose results could be relied upon for making the right decisions.
Annex 71 IES Modelling Report Charts and Graphs
Figure 20: The comparison below of weekly energy demand illustrates that not only the overall energy but also the week-by-week energy demand is a close match.
Figure 21: This plot compares the hourly heating and cooling loads from the simulated results and metered data; over the period, the simulated performance can again be seen to track the measured data extremely closely.
Figure 22: The CUSUM error plot helps identify times where a rapid discrepancy between the simulated and metered performance has occurred. In each of these instances, however, the error was in fact a result of the test cell not operating as scheduled, not an issue with the simulation.
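The CUSUM idea behind figure 22 can be sketched briefly: accumulate the interval-by-interval error between simulated and metered load, and look for sharp changes in the cumulative curve. The data and the flagging threshold below are synthetic and illustrative, not taken from the study.

```python
# Sketch: a CUSUM-style check on simulated-vs-metered heating load.
# The values and the 5.0 threshold are invented for illustration.
import numpy as np

metered = np.array([10.0, 12.0, 11.0, 30.0, 31.0, 11.5, 10.5])
simulated = np.array([10.2, 11.8, 11.1, 11.0, 11.2, 11.4, 10.6])

residual = simulated - metered  # per-interval error
cusum = np.cumsum(residual)     # cumulative error over time; plotted in a CUSUM chart

# A sharp, sustained change in the cumulative curve marks a discrepancy
# window -- in the experiment these turned out to be periods where the
# test cell was not operated as scheduled, not simulation errors.
flagged = np.where(np.abs(residual) > 5.0)[0]
print(flagged)  # -> [3 4]
```

The value of the cumulative view over a plain residual plot is that small but persistent bias accumulates into a visible slope, while one-off spikes (like the two flagged intervals above) appear as abrupt steps.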