See the overview page for the links to the rest of my notes. This day started with two keynotes.
Of the afternoon sessions I only attended a few talks and made no notes (laptop at 2% battery). The presentation by Anders Ekholm was the most interesting; I had already made notes of that one at a Toronto workshop.

Vladimir Bazjanac (Lawrence Berkeley National Laboratory) - Virtual building environments: applying information modelling to buildings
A well-known term is BIM, the Building Information Model. He prefers "virtual building environment". (Sorry, I missed the first five minutes).
A virtual building environment is a set of software tools that together define a building, its parts, behaviour and performance; manipulate the data used in planning, design, etc.; and make it possible to conduct experiments. The software is used by experts who also have the corresponding professional experience.
A VBE (virtual building environment) facility is a physical place where end users can get real help, including expert assistance in creating and operating virtual buildings for real-life projects. It also provides training opportunities.
(Note: this apparently is aimed at the big companies and the big projects.)
The purpose of the VBE facility is to support real-life projects of any complexity. They can also develop new extensions of the IFC data model as needed. (Neat idea! Having that possibility, extending IFC for your company's needs, is really powerful. A bit similar to the possibilities you have with open source software and a local expert who can modify it to your needs.)
An example that he gave was the new Freedom tower in New York with some walls that could not be modelled with IFC. So they wrote GDL objects to handle it and changed the IFC interface. (I'm writing this down, almost purring like a cat... I mean, this sounds like a couple of bright people doing a really good job. It's probably the hacker in me that likes this so much.)
Requirements consist of the needs and wants of clients and users, functional and operational aspects, quality determinants, etc.
Regarding requirements, there are some challenges and misconceptions.
Traceability is very important. "Achieving traceability aids in keeping the design in line with the goals of the end product" (quote).
He mentioned a program for requirement capture made by one of his students: Rabbit. It had a good-looking user interface, so you might want to look at it.
They've now devised a framework: computational hybrid assistance for requirement management (CHARM). They did some interfacing with IFC. You could add requirement properties to IFC objects, but the user needed to be aware that it was done.
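The idea of attaching requirement properties to IFC objects can be sketched in a few lines of plain Python. This is an invented illustration, not CHARM's actual implementation; the class, the `Pset_Requirements` name and the property values are all assumptions:

```python
# Hypothetical sketch: a requirement property set attached to an
# IFC-style object. Names and values are invented for illustration.

class IfcObject:
    """Minimal stand-in for an IFC entity."""
    def __init__(self, global_id, name):
        self.global_id = global_id
        self.name = name
        self.property_sets = {}  # pset name -> {property: value}

    def add_property_set(self, pset_name, properties):
        self.property_sets[pset_name] = properties


wall = IfcObject("2O2Fr$t4X7Zf8NOew3FLOH", "North wall")
# The requirement now travels with the object, but a receiving
# application only benefits if it knows to look for this pset.
wall.add_property_set(
    "Pset_Requirements",
    {"MaxUValue": 0.3, "FireRating": "REI60"},
)
print(wall.property_sets["Pset_Requirements"]["MaxUValue"])  # 0.3
```

That last point is exactly the caveat from the talk: the data is there, but only an aware user (or application) will look for it.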
He mentioned Wim Ghieling's GARM model as an example that other people had also tried to link requirements to design. The GARM has Functional Units with requirements and Technical Solutions with characteristics. An FU decomposes into TSs; a TS fulfills one or more FUs. (Funny that he mentions it in this way, we at the TU Delft have just started an effort to build an object library network with exactly this functional unit/technical solution distinction as the backbone. See objects.bcxml.net for an example. We present it a bit more as demand objects and supply objects, but the effect is more or less the same.)
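The FU/TS (or demand/supply) distinction can be sketched in plain Python. Class names, attributes and the numbers are invented for illustration:

```python
# A plain-Python sketch of the GARM functional unit / technical
# solution idea; names and numbers are made up.

class FunctionalUnit:
    """Demand side: what is required."""
    def __init__(self, name, max_u_value):
        self.name = name
        self.max_u_value = max_u_value  # required thermal performance
        self.solutions = []  # an FU decomposes into candidate TSs


class TechnicalSolution:
    """Supply side: what a solution actually offers."""
    def __init__(self, name, u_value):
        self.name = name
        self.u_value = u_value  # actual characteristic

    def fulfills(self, fu):
        # A TS fulfills an FU when its characteristic meets the requirement.
        return self.u_value <= fu.max_u_value


outer_wall = FunctionalUnit("separate inside from outside", max_u_value=0.3)
cavity_wall = TechnicalSolution("cavity wall", u_value=0.25)
outer_wall.solutions.append(cavity_wall)
print(cavity_wall.fulfills(outer_wall))  # True
```

The nice property of this split is that you can swap in a different TS (a different supply object) and re-check it against the same unchanged demand.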
In answer to a question: they haven't yet looked at the so-called eXtreme or agile methods of design, where you move much more back and forth from requirements to design, in cooperation with the client. But it sounds very interesting.
Model based technologies: the data objects model the real-world objects. Object-based cad systems are an example, word processors a counter-example. Model-based interoperability is the exchange of information using models and exchange standards, etc.
But. Much project information will remain as unstructured documents. You can improve it with, for instance, document management systems and meta-data.
Humans interact with computers. You have to look at the entire social/technical system: other humans also interact with computers. The humans interact among each other, and likewise, possibly, the computers. Humans can communicate directly or by sharing documents. But that kind of sharing is, seen from the system, ineffective. A human interprets computer content, gives it to another human, who feeds it back in, in another form.
IFC etcetera allows direct computer communication. But this is not necessarily enough. Ok, you get an IFC file. What has changed? Why? What is the intention? What exactly do I have to do?
So you still need human communication to complement the purely model-based exchange. The ideal would then be to link a model-based exchange to an accompanying electronic document.
Cross-referencing is straightforward, but effective for many types of functions. You could link an IFC object to a document on a website. Just works. But there are problems with levels of aggregation: one document can deal with multiple objects. Multiple links. What if the document changes a bit for just a few of those objects?
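The cross-referencing idea, and its aggregation problem, can be shown with a trivial link table. Everything here (object ids, URLs) is invented for illustration:

```python
# A simple cross-reference table between model objects and
# documents; ids and URLs are made up.

links = [
    ("wall-001", "http://example.com/docs/facade-spec.html"),
    ("wall-002", "http://example.com/docs/facade-spec.html"),
    ("door-017", "http://example.com/docs/door-schedule.html"),
]


def documents_for(object_id):
    """All documents linked to one model object."""
    return [url for obj, url in links if obj == object_id]


def objects_for(url):
    # The aggregation problem in one line: a single document serves
    # many objects, so a change in the document touches all of them,
    # even if it was really only about one.
    return [obj for obj, u in links if u == url]


print(objects_for("http://example.com/docs/facade-spec.html"))
# ['wall-001', 'wall-002']
```

The lookup "just works", as the talk said; what the table cannot express is *which part* of the document belongs to which object.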
Text mining and search is covered in the next presentation.
Other options are hybrid documents (containing both model data and textual data for instance) and having a presentation layer.
This presentation layer can be static or dynamic. Static means capturing a view of the data at a specific point in time.
The presentation-layer documents give snapshots of the model data to convey specific information for specific purposes in human-to-human communication.
Conclusion: model-based information technologies and document-based information technologies should be integrated and used to support transactions that serve human-to-human communication.
Documents are important; a lot of information is contained in them. He presented an approach to connecting the document world with the model world.
There are a few possibilities for structured documents, but they didn't do anything on those subjects and looked at unstructured documents instead. Text mining is a good tactic to tackle this; it is a three-step process.
They used Bayesian networks: directed acyclic graphs. Nodes are random variables, arcs their causal relationships. Knowledge is represented qualitatively by the dependencies between nodes, quantitatively by the strength of those dependencies (determined in a Bayesian way). (Note: for a readable introduction to just the probability part of Bayes, read how to get rid of spam.)
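For the probability part alone, the spam-filter style of Bayes' rule fits in a few lines. The numbers below are made up for illustration:

```python
# Bayes' rule in the spam-filter style: what is the probability a
# mail is spam, given that it contains a certain word? All numbers
# are invented for illustration.

p_spam = 0.4              # prior: fraction of mail that is spam
p_word_given_spam = 0.8   # the word appears in 80% of spam
p_word_given_ham = 0.02   # ...and in 2% of legitimate mail

# Total probability of seeing the word at all.
p_word = (p_word_given_spam * p_spam
          + p_word_given_ham * (1 - p_spam))

# Bayes' rule: P(spam | word) = P(word | spam) * P(spam) / P(word)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))  # 0.964
```

A Bayesian network is essentially this calculation chained over many such dependencies at once.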
They implemented a four-layer Bayesian mining network. (I won't burn my fingers by trying to explain it here, read the paper. It looks like a workable idea. To me, it looks like a mix between calculating probabilities the Bayesian way and neural networks. I'll probably commit heresy by looking at it that way, though :-)
One of the layers is the concept model layer, which you might see as an ontology layer. That layer is, via an intermediary layer, connected to the documents. On the other side, it is connected (in a Bayesian way) with the product model. Using the Bayesian probabilities, items in the product model are related to documents.
A nice thing about Bayesian networks is that they don't care what a node is: terms lifted from the document, metadata associated with the document, or the context where you found the document. All these pieces of information, used as nodes, help the network find the correct documents.
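The "any evidence can be a node" point can be shown with a much-simplified scoring example. This is naive additive scoring, not the paper's four-layer Bayesian network, and all document names and values are invented:

```python
# Much-simplified illustration: terms AND metadata both act as
# evidence contributing to a document's relevance. Naive scoring,
# not a real Bayesian network; all data is made up.

documents = {
    "facade-spec": {"terms": {"wall", "insulation"},
                    "metadata": {"discipline": "architecture"}},
    "hvac-report": {"terms": {"duct", "airflow"},
                    "metadata": {"discipline": "mechanical"}},
}


def score(doc, query_terms, query_metadata):
    """Count how many evidence 'nodes' a document matches."""
    hits = len(doc["terms"] & query_terms)
    hits += sum(1 for key, value in query_metadata.items()
                if doc["metadata"].get(key) == value)
    return hits


query_terms = {"wall"}
query_metadata = {"discipline": "architecture"}
ranked = sorted(documents,
                key=lambda name: score(documents[name],
                                       query_terms, query_metadata),
                reverse=True)
print(ranked[0])  # facade-spec
```

The point carried over from the talk: the scoring function never asks whether a piece of evidence is a term, metadata, or context; every node just contributes.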
His conclusion is that a flexible framework for connecting the model and the document world is attractive to have, and that his solution seems able to handle it.
My name is Reinout van Rees and I work a lot with Python (programming language) and Django (website framework). I live in The Netherlands and I'm happily married to Annie van Rees-Kooiman.