Ontario researchers may one day know not only what the Mona Lisa was trying to convey with her enigmatic smile, but how she conveyed it.
Toronto-based Sheridan Institute of Technology and Advanced Learning, along with the University of Toronto and Queen's University, is using a Silicon Graphics Onyx4 UltimateVision system to create what is expected to be the first research-quality simulated and interactive 3-D human face, isolating the facial patterns people use to communicate.
The researchers, who liken their work to what geneticists do in isolating the function of a single gene, hope not only to discover how humans convey emotion through facial expressions, but also to better understand the impact each expression or motion has on human subjects.
Avrim Katzman, director of Sheridan’s Visualization Design Institute, says the college has been working on the five-year project for about a year and a half but only recently decided to speak about it publicly. Katzman says Sheridan felt it was time to diversify from the entertainment field and to look for opportunities in areas such as visualization in the sciences and engineering.
“We’re trying to create a human head which is realistic, based on scientific principle but which is also easily modified by psychological researchers so they can experiment with various modes of communication,” says Katzman.
The head is being built using a 3-D scanner to capture a real human face; the model’s physical characteristics will be measured with laser beams and recorded digitally.
The applications for the research range from the medically therapeutic to cross-cultural.
“Once scientists have a more complete model, then we can apply that knowledge to areas where there are deficiencies in communication such as autism,” he says. “Certain kinds of brain injuries might be able to be diagnosed or we might find more effective ways of interviewing people. Another of the aspects is studying communication across different cultures (to see if) we are misinterpreting what people are saying — not just the words but their expressions.”
And while the graphics-intensive nature of the research requires a huge amount of computing power to build a face that functions in real time, the challenge of the research itself is in the sheer volume of ever-shifting information that has to be pieced together, says Katzman.
“As we gather bits and pieces of the puzzle we think we’re getting closer to getting the complete picture, but the picture keeps shifting, and each new piece affects the rest of the puzzle, so that’s the challenge — that you may be led in false directions.”
According to Walter Stewart, director of business development for SGI in Canada and global co-ordinator of the vendor’s grid strategy, designing a system capable of providing the processing power required for the project started with calculating how many pixels needed to be created, how quickly they needed to be created, and how large the data set would be.
“If you think of all the planes and musculature on a human face you’re going to have a very large data set,” says Stewart. “That’s why you are going to need some very significant processing power.”
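To give a sense of the arithmetic Stewart describes, the sketch below runs a back-of-envelope sizing calculation for a real-time visualization pipeline. The resolution, colour depth, and frame rate are illustrative assumptions, not figures from the Sheridan/SGI project:

```python
# Back-of-envelope sizing for a real-time facial visualization pipeline.
# All numbers are illustrative assumptions, not project figures.

RES_X, RES_Y = 1920, 1080   # assumed display resolution (pixels)
BYTES_PER_PIXEL = 4         # RGBA, 8 bits per channel
FRAME_RATE = 60             # frames per second for smooth interaction

pixels_per_frame = RES_X * RES_Y
bytes_per_second = pixels_per_frame * BYTES_PER_PIXEL * FRAME_RATE

print(f"Pixels per frame: {pixels_per_frame:,}")
print(f"Pixel throughput: {bytes_per_second / 1e9:.2f} GB/s")
```

Even at these modest assumed settings the raw pixel stream approaches half a gigabyte per second, before accounting for the geometry and muscle data behind each frame — which is the point Stewart is making about processing power.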
The Onyx4 UltimateVision, based on the NUMAflex architecture, provides the visualization power of up to 32 graphics processors.
“It’s a system that represents a major breakthrough for SGI in that historically we have had to develop our own graphics chips, our own operating system and our own processing chips to make this happen, but what we’re beginning to do is find ways of using commodity product and putting that commodity product into SGI’s unique architecture,” says Stewart. “We are using commonly available graphics cards, but because of our architecture we are able to stack all those graphics cards and get them all working as one to create that ability (to process huge data sets).”
In the future, Stewart believes, data will be increasingly represented in graphic formats, which he says are easier for the human brain to process.