FORMS FOR THE SUBMISSION OF THE FINAL PROGRESS REPORT OF A RESEARCH PROJECT OF DESMI 2008
The Final Report is submitted in two copies, no later than two months after the end of the project's implementation period. The Final Report consists of two parts:
PART A - Final Progress Report
A.1. "General Project Information"
A.2. "Summary" of 500 words on the course of the project's implementation.
A.3. "Final Project Implementation Report": a standard Report Sheet is completed for each Work Package (WP) of the project.
A.4. "Summary Table of Work Packages", showing the implementation progress of the Work Packages and the Deliverables produced by each of them.
PART B - Annexes
B.1. "Annex B1" of the report contains the project deliverables that can be provided in printed form.
B.2. "Annex B2" of the report contains any other information concerning the project that is considered necessary.
The Final Financial Report is submitted on separate forms, which are available on the website of the Research Promotion Foundation (RPF) as Excel files.
PART A
A.1. GENERAL PROJECT INFORMATION
Operational Programme: Sustainable Development and Competitiveness
Priority Axis: Knowledge Society and Innovation
Programme: INFORMATION AND COMMUNICATION TECHNOLOGIES
Action: INFORMATION TECHNOLOGIES
Project Protocol Number: ΤΠΕ/ΠΛΗΡΟ/0308(ΒΙΕ)/03
Project Title: Synthesis of Dynamic Characters with Motion Capture Data for Human Figure Animation: Educating the Cyprus Police Force
Host Organisation: Frederick Research Center (FRC)
Project Coordinator: Dr. Stephania Loizidou Himona
Project Start Date: 01/12/2009
Project End Date: 31/08/2012
Report Submission Date: 31/10/2012
Approved Grant: 123,954.00 Euro
Amount paid by the RPF (to date): 99,163.20 Euro
Amount spent (to date): 118,214.00 Euro
Project Coordinator contact details:
Address: 7-9 Filokiprou, Palouriotissa, 1036 Nicosia
Telephones: 22-345159, 99-620466
Fax: 22-438234
E-mail: com.ls@frederick.ac.cy
A.2. "SUMMARY" (500 words - 2 pages)
The project has now been completed successfully. Although an extension in time was required to complete certain tasks (for reasons explained in WP1, which were duly approved), the work that was set out has come to an end. This does not mean that the work itself is over: the infrastructure that has been set up allows the research to continue and to produce further results. Most of the goals that were set have been met, and we are pleased to be able to report so. The pilot system for training the Cyprus Police force is now in place, and most of the major problems reported earlier have been tackled and solved.

Specifically, for the realistic motion of the avatars we employed two methods, Motion Capture and Dynamics of Motion, and we identified when it is more appropriate (and indeed possible) to use each of them. Particular emphasis was given to Motion Capture, which is better suited to commonly performed motion sequences; for the requirements of the project we acquired and installed a highly accurate, state-of-the-art Motion Capture and Tracking system. To make use of the diverse recorded motions and their combinations we implemented the Motion Graphs algorithm, an advanced method for splicing motions together. We also used ragdoll simulation (physically-based animation) to generate animation of avatars in extreme or difficult situations. We found that both methods are necessary to simulate virtual characters realistically in the 3D training simulator, and we therefore switch between the two automatically using triggering events. The switching relies on user interaction, in particular on the use of the trainee's firearm.

We also examined the integration of facial expressions into the system, as well as the integration of realistic full-body modelling of the human figures, since it is important for trainees to be able to interpret and understand the emotions of the suspects in order to predict their next movements and prevent aggressive behaviour.

The last part of the work was the development and testing of the platform. Simple scenarios were created, involving multiple responsive 3D characters for the police trainees to engage with. To identify how the realistic motion of avatars improves trainee performance in our pilot system, two categories of scenarios were set up: with and without responsiveness to the trainees' actions. A small number of users took part in both categories; future work could include the integration of a larger number of suspects in order to obtain, analyse and collate solid experimental results. Preliminary experiments nevertheless indicate that responsive characters can increase the sense of immersion in the 3D virtual training application and thus reinforce the presence of the trainee in the virtual world.
A.3. "FINAL PROJECT IMPLEMENTATION REPORT" (up to 1500 words - 5 pages per WP)

Work Package Title: WP1 (ΔΕ1): Project Management
Organisation Code: ΑΦ (HO)  ΣΦ1 (PA1)  ΣΦ2 (PA2)  ΣΦ3 (PA3)  ΣΦ4 (PA4)
Person-months per organisation (as per Contract): 3.5  0.5
Person-months per organisation (actually worked): 3.9  0.5

Work Package Objectives
(The objectives of this Work Package are listed briefly.)
The main aim of this work package has been the smooth coordination of the project. This involved monitoring the proper running of the programme and checking that the expected work was delivered. It also covered the management of the financial resources (budget) and their proper allocation. WP1 was carried out throughout the duration of the project, mainly by the host organisation with additional support from PA1.

Description of Work
(The activities relating to the Management of the Project are recorded, along with any problems that arose in the Coordination of the Collaboration Network and how they were resolved.)
This work package has been responsible for the smooth coordination and management of the project. Coordination involved communication between all team members, both through weekly physical meetings and through technology for fast communication (e.g. the project website, e-mail and teleconferencing). Management involved setting the criteria and standards of the work, supervising the team and handling crises. Specifically, it has been responsible for the preparation and dissemination of the deliverables as expected and on time. The management team prepared the 6-month, interim and 18-month progress reports, as well as the current final report. It has also been responsible for monitoring the development of the software (D6), the hybrid system for motion control, i.e. the hybrid system that combines the Dynamics of Motion with Motion Capture data through the use of Motion Graphs (see WP4).

WP1 also involved the management of the financial resources, that is, the proper allocation of the budget to all the partners involved. However, due to the late arrival of the 2nd instalment (as well as the late delivery and installation of the motion capture system), it became necessary to request an extension of the project completion date, which was approved once reported. Initially, a 6-month extension period was granted (ending 31st May 2012) and, since there was a further delay in the 2nd instalment (delivered 29th March 2012), a further 3-month extension was added (finally ending 31st August 2012). This proved necessary to enable us to finish certain tasks of the project and their associated goals. It should be noted that, due to this time extension, an extra half person-month was required to complete the work. Specifically, the project coordinator was assigned and worked an additional half month within the extension period, coordinating and finalising the work (spread over July and August 2012) and disseminating the results (D9 in WP2).

Deliverables
(The Deliverables produced by this Work Package are listed.)
Within Work Package 1, the deliverables include the Progress Reports (six-monthly, Interim and Final Progress Reports) that must be submitted to the RPF during the project.
1. D1: 6-month progress report (submitted 31st May 2010)
2. D2: Interim report (submitted 30th November 2010)
3. D3: 18-month progress report (submitted 31st May 2011)
4. D4: Final report (current)
5. Figures (see Appendix Γ1)
6. Timesheets
Work Package Title: WP2 (ΔΕ2): Dissemination and Exploitation of Results
Organisation Code: ΑΦ (HO)  ΣΦ1 (PA1)  ΣΦ2 (PA2)  ΣΦ3 (PA3)  ΣΦ4 (PA4)
Person-months per organisation (as per Contract): 1.5  1.5  0.5  0.5
Person-months per organisation (actually worked): 1.6  1.5  0.5  0.5

Work Package Objectives (as described in the Contract)
(The objectives of this Work Package are listed briefly.)
The main aim of this work package is the dissemination of the results that have emerged from the research project.

Description of Work - Degree of achievement of the Work Package objectives
(The activities relating to the Dissemination and Exploitation of Results are recorded. The degree of their implementation, any problems that arose and any deviations from the original objectives are reported in detail. Where applicable, numerical and quantitative references are made to the individual stages of the WP, and the organisation that undertook and carried out each activity is clearly identified.)
A website portal for the project has been implemented (http://simpol.ploegos.com/). It has been used not only to inform the general public of the project's progress, but also as a means of communication between all the parties in the consortium. The website has been continuously updated to reflect the status of the project at any given time. It was developed mainly by an independent freelance web designer, who was paid for his work; PA3 took over the tasks of maintaining and updating the website as required. Moreover, a URL under the host organisation, http://research.frederick.ac.cy/simpol, has been linked to the website's current location, making it easier for the academic community to access it (via the HO) through popular search engines on the web.

Two research papers have been published so far (D8 and D9). The first (D8) was presented at the MCIS2011 conference (the 6th Mediterranean Conference on Information Systems), held 3-5 September 2011 in Limassol, Cyprus, and published in the respective proceedings: S. L. Himona, E. Stavrakis, A. Loizides, A. Savva, and Y. Chrysanthou, "SIMPOL VR - A Virtual Reality Law Enforcement Training Simulator", in the 6th Mediterranean Conference on Information Systems (MCIS 2011), paper 22, 2011 (http://aisel.aisnet.org/mcis2011/22/). The second (D9) was presented at the 4th International Euro-Mediterranean Conference on Cultural Heritage (EUROMED2012), held between 29th October and 3rd November 2012 in Limassol, Cyprus. This research was published as: E. Stavrakis, A. Aristidou, M. Savva, S. Himona, and Y. Chrysanthou, "Digitization of Cypriot Folk Dances", in Progress in Cultural Heritage Preservation, Lecture Notes in Computer Science, vol. 7616, Springer Berlin / Heidelberg, 2012, pp. 404-413 (http://www.springerlink.com/content/32260t05m042033m).

Results of the project were also briefly presented through oral presentations at the following two events:
- "e-Learning For All" Conference, 6th October 2012, Eugenides Foundation, Athens, Greece.
- Interaction Design and Human Computer Interaction Workshop, 4th International Conference on Typography and Visual Communication (ICTVC), 15-19 June 2010, Nicosia, Cyprus.

Deliverables
(The Deliverables produced by this Work Package are listed briefly.)
Within Work Package 2, the deliverables include publications in scientific journals, the organisation of local workshops to present the project results, presentations of the project results at conferences abroad, applications for intellectual property protection, etc.
D8: The integration of the system with complete human figure modelling and facial expressions for realistic animations (software + scientific paper, MCIS2011 - "SIMPOL VR - A VIRTUAL REALITY LAW ENFORCEMENT TRAINING SIMULATOR") - see WP6
D9: The complete system, together with its applicability, the training simulation system (software + scientific paper, EUROMED 2012 - "DIGITIZATION OF THE CYPRIOT FOLK DANCES") - see WP7
D10: Report based on experimental results - see WP7
Work Package Title: WP3 (ΔΕ3): Platform Design
Organisation Code: ΑΦ (HO)  ΣΦ1 (PA1)  ΣΦ2 (PA2)  ΣΦ3 (PA3)  ΣΦ4 (PA4)
Person-months per organisation (as per Contract): 0.5  1.5  0.5  0.5
Person-months per organisation (actually worked): 0.5  1.75  0.5  0.25

Work Package Objectives (as described in the Contract)
(The objectives of this Work Package are listed briefly.)
The main aim of WP3 has been the design of the software platform used to implement the software application components, together with its interface for the exploitation of the application.

Description of Work - Degree of achievement of the Work Package objectives
(The activities belonging to this Work Package (WP) are recorded. The degree of their implementation, any problems that arose and any deviations from the original objectives are reported in detail. Where applicable, numerical and quantitative references are made to the individual stages of the WP, and the organisation that undertook and carried out each activity is clearly identified.)
In this work package we established the software design used for implementing the various software components that make up the training platform. We based this design primarily on an examination of the existing video-based training platform used by the Cyprus Police Force, and consulted experienced police trainers to understand the user requirements. We also extensively reviewed, analysed and took into consideration the capabilities of different technologies, both software and hardware, that could be used for the subsequent implementation of the platform. In addition, the platform design was amended to accommodate the, now operational, Motion Tracking & Capture system at the Virtual Reality Lab of PA1, which was crucial for the successful implementation of the project. The design of the platform:
- takes into consideration the most important user requirements;
- allowed the implementation of desirable features in the new training platform;
- enabled the integration and support of the necessary hardware devices;
- set the foundation for delivering an extensible and reusable software platform that can potentially be turned into a fully featured software product in the future.
Methodology and Results
(The methodological approach followed is described in detail and the results produced by this Work Package are analysed.)
The extraction of the user requirements was the initial step in establishing the platform design. We reviewed the commercial training system used by the Cyprus Police Force and compared it to the features possible in the proposed training platform. We subsequently produced a list of the pros and cons of the two systems, detailing the advantages of the VR-based platform (the one proposed and developed by the project) over the video-based platform (the existing training system of the Police Force). We have described this in more detail in our interim report and have published it in D8.

The software design took into consideration characteristics such as usability, extensibility, modularity and compatibility. To fulfil these requirements we chose a client/server architecture that uses the Model-View-Controller software design pattern in its implementation. See Figure 3.1 (the conceptual software design of the individual components and their interoperation) and Figure 3.2 (a more detailed diagram of the implementation of this software design) in Appendix Γ1 for further explanation. This design enabled connecting software and hardware components and transmitting data over the network between the software components.

The training platform comprises the scenario server, the trainee application and the trainer's user interface. The scenario server is the software application responsible for sending and receiving data to and from all other components, facilitating motion tracking and data asset management between the different software components. The trainee application is a real-time 3D virtual reality application that allows trainees to interact with the virtual training world. It is coupled with the motion tracking system, allowing users to navigate the virtual world from the real environment (e.g. by moving their head and limbs) and to interact with virtual objects (e.g. using a physical dummy firearm to affect the virtual world). Finally, the trainer's user interface enables a trainer to control the training session by sending commands to the trainee's application (e.g. the trainer may take control of one of the virtual characters). A simplified sketch of this message flow is given after the deliverables of this work package.

Deliverables
(The Deliverables produced by this Work Package are listed briefly.)
D5: Software interface; setting up the platform. A functional software platform has been developed, as part of WP7, based on the platform design of WP3. The platform provides the infrastructure for running training scenarios and enables communication between the different modules of the system (e.g. the trainee's and the trainer's user interfaces, as well as the server holding the scenario content such as 3D models, textures, motions etc.).
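The report does not include source code for the platform. As a rough illustration only, the following Python sketch shows how a minimal scenario server could relay messages between a trainer client and a trainee client, in the spirit of the client/server design described above. The JSON-over-TCP wire format, the port number and all class and method names are assumptions made for this sketch, not the project's actual implementation.

```python
# Minimal, hypothetical sketch of the message flow described above: a scenario
# server relaying commands between a trainer client and a trainee client. The
# component roles follow the report; the wire format and names are illustrative.
import json
import socket
import threading

class ScenarioServer:
    """Accepts trainer/trainee connections and routes messages between them."""

    def __init__(self, host="0.0.0.0", port=9000):
        self.clients = {}                      # role ("trainer"/"trainee") -> socket
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((host, port))
        self.sock.listen(5)

    def serve_forever(self):
        while True:
            conn, _addr = self.sock.accept()
            threading.Thread(target=self._handle, args=(conn,), daemon=True).start()

    def _handle(self, conn):
        stream = conn.makefile("r")
        # The first message identifies the client, e.g. {"role": "trainer"}.
        role = json.loads(stream.readline())["role"]
        self.clients[role] = conn
        for line in stream:
            message = json.loads(line)         # e.g. {"cmd": "take_over_character", "id": 2}
            self._route(role, message)

    def _route(self, sender, message):
        # Trainer commands go to the trainee application and vice versa; tracking
        # updates or asset requests could be broadcast in the same way.
        target = "trainee" if sender == "trainer" else "trainer"
        if target in self.clients:
            self.clients[target].sendall((json.dumps(message) + "\n").encode())

if __name__ == "__main__":
    ScenarioServer().serve_forever()
```

In the actual platform the same role is played by the scenario server of Figures 3.1 and 3.2, which additionally handles motion tracking data and scenario assets.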
Work Package Title: WP4 (ΔΕ4): Realistic Motion of Avatars
Organisation Code: ΑΦ (HO)  ΣΦ1 (PA1)  ΣΦ2 (PA2)  ΣΦ3 (PA3)  ΣΦ4 (PA4)
Person-months per organisation (as per Contract): 11.0
Person-months per organisation (actually worked): 11.0

Work Package Objectives (as described in the Contract)
(The objectives of this Work Package are listed briefly.)
The combination of Motion Capture data with the Dynamics of Motion for the realistic movement of the anthropomorphic figures has been the major research work of this work package.

Description of Work - Degree of achievement of the Work Package objectives
(The activities belonging to this Work Package (WP) are recorded. The degree of their implementation, any problems that arose and any deviations from the original objectives are reported in detail. Where applicable, numerical and quantitative references are made to the individual stages of the WP, and the organisation that undertook and carried out each activity is clearly identified.)
After an extended study to identify the latest trends and the state of the art in the area of motion control and synthesis, we identified the requirements for animating virtual characters using both Motion Capture and Dynamics. We implemented one of the most popular algorithms for generating long animation sequences from smaller clips of motion-captured data, namely the Motion Graphs technique. Additionally, we experimented with various software libraries used by the research community for simulating rigid-body dynamics, set up the virtual characters to support ragdoll simulation in certain situations, and inserted physics-controlled objects into our scenes. The final system can run both motion capture and dynamic simulation.
Methodology and Results
(The methodological approach followed is described in detail and the results produced by this Work Package are analysed.)
In this WP we identified the use cases of the two methods of motion control, namely Motion Capture data and Dynamics of Motion. Briefly, Motion Capture data is appropriate for commonly performed motion sequences (e.g. walking, running), while Dynamics of Motion is more suitable for simulating motions that are difficult or impossible to capture (e.g. death or extreme circumstances).

We identified that the most suitable way to simulate long, commonly performed human motions for the virtual characters of the training simulator, such as walking, running and standing, was to capture and reuse motion data. This was necessary, as such motion data are particularly difficult, time consuming and expensive to generate by other means, such as interactive animation. We therefore used motion capture data acquired in the Motion Capture facility of PA1, which were then attached to the skeletal structures of the virtual characters to provide realistic motion in 3D. To combine these diverse recorded motions we implemented the Motion Graphs algorithm, one of the most advanced methods for splicing motions together; a brief sketch of the underlying idea is given at the end of this section. This approach enables us to simulate very realistically most of the motions needed in the training simulator.

We devised the workflow needed to capture realistic human motion using Motion Capture technologies in the Virtual Reality facility built at the premises of PA1. Through the project, PA1 acquired and installed a highly accurate, state-of-the-art Motion Capture and Tracking system in a dedicated room featuring a 3-screen wide projection wall. The HO worked closely with PA1 to build the necessary software components, such as motion tracking network services and real-time user interfacing for 3D applications. This software was combined with the 3D training platform to provide real-time interaction of the trainees with the simulator; it involves tracking the head and legs and features a customised, tracked dummy handgun. Photos of the VR facility are shown in Figure 4.1, while Figure 4.2 shows a photograph of a subject being motion captured to generate motion for the virtual characters. Figures 4.3 and 4.4 show the various custom equipment we built to motion track the trainee, and Figure 4.5 shows the configuration of the marker-based rigid bodies and their testing. All figures can be found in Appendix Γ1.

However, capturing animation for every aspect of the training simulator is not possible: some motions are either difficult to replicate in reality or, because of the real-time interaction of the users with the dynamic environment, it is impossible to anticipate and have at hand motion-captured sequences that appear realistic. For example, if a virtual character is shot and falls onto an arbitrary object (e.g. a box), a pre-recorded animation for this case cannot exist. We therefore used ragdoll simulation (physically-based animation of articulated characters) to dynamically generate animation for avatars in some of these difficult cases that occur in the training scenarios, in particular when a virtual character is shot and falls unpredictably onto the ground or onto virtual 3D objects.
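The Motion Graphs implementation itself is not reproduced in this report. As a rough illustration of the core idea, namely finding frames in different clips whose poses are similar enough to serve as transition points and blending across them, the following Python sketch is offered under stated assumptions: poses are flat arrays of joint values, and the distance metric, threshold and function names are hypothetical, not the project's code.

```python
# Illustrative sketch (not the project's implementation) of the central step of
# Motion Graphs: detecting candidate transition frames between two motion clips
# and splicing them with a short linear blend.
import numpy as np

def pose_distance(pose_a: np.ndarray, pose_b: np.ndarray) -> float:
    """Simple pose metric: Euclidean distance between joint-value vectors."""
    return float(np.linalg.norm(pose_a - pose_b))

def find_transition_points(clip_a: np.ndarray, clip_b: np.ndarray,
                           threshold: float = 0.25):
    """Return (frame_in_a, frame_in_b) pairs that are valid transition candidates."""
    candidates = []
    for i, pose_a in enumerate(clip_a):
        for j, pose_b in enumerate(clip_b):
            if pose_distance(pose_a, pose_b) < threshold:
                candidates.append((i, j))      # an edge of the motion graph
    return candidates

def blend_transition(clip_a, clip_b, i, j, blend_frames=10):
    """Splice clip_a[:i] into clip_b[j:] with a short linear cross-fade."""
    blended = []
    for k in range(blend_frames):
        w = k / (blend_frames - 1)
        blended.append((1 - w) * clip_a[min(i + k, len(clip_a) - 1)]
                       + w * clip_b[min(j + k, len(clip_b) - 1)])
    return np.concatenate([clip_a[:i], np.stack(blended), clip_b[j + blend_frames:]])

if __name__ == "__main__":
    walk = np.random.rand(120, 54)     # 120 frames x 54 joint values (dummy data)
    stand = np.random.rand(80, 54)
    pairs = find_transition_points(walk, stand, threshold=2.5)
    if pairs:
        spliced = blend_transition(walk, stand, *pairs[0])
        print("spliced sequence length:", len(spliced))
```

In practice the published Motion Graphs technique also uses point-cloud pose comparison and graph pruning; the sketch above only shows the transition-detection and blending idea in miniature.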
Deliverables
(The Deliverables produced by this Work Package are listed briefly.)
D6: The Hybrid system for motion control. Full use of each method has been made possible, and a system has been set up that applies each of them separately at predefined points in the simulation.
Work Package Title: WP5 (ΔΕ5): Automatic Transition within the Hybrid Method
Organisation Code: ΑΦ (HO)  ΣΦ1 (PA1)  ΣΦ2 (PA2)  ΣΦ3 (PA3)  ΣΦ4 (PA4)
Person-months per organisation (as per Contract): 6.0
Person-months per organisation (actually worked): 6.0

Work Package Objectives (as described in the Contract)
(The objectives of this Work Package are listed briefly.)
The aim of work package 5 has been to achieve believable automatic transitions between the Motion Capture and the Dynamics methods, as well as a clear understanding of when each is more appropriate.

Description of Work - Degree of achievement of the Work Package objectives
(The activities belonging to this Work Package (WP) are recorded. The degree of their implementation, any problems that arose and any deviations from the original objectives are reported in detail. Where applicable, numerical and quantitative references are made to the individual stages of the WP, and the organisation that undertook and carried out each activity is clearly identified.)
After studying the two motion control methods, i.e. motion capture and dynamics of motion, it became clear when it is best (or even possible) to use each of them. Through this WP we identified that motion-captured character animation and physically-based animation are both necessary to simulate virtual characters realistically in the 3D training simulator. We use the two methods in different situations, as described in WP4, which do not require the two animation techniques to run in conjunction. We developed a simple method for switching between them using triggering events both from the real world and from within the virtual world. To improve the realism of our scenes, we also added physics properties to some of the non-character objects in the virtual scene, such as boxes, with which both the virtual characters and the trainee may interact.
Methodology and Results
(The methodological approach followed is described in detail and the results produced by this Work Package are analysed.)
Extended study of the area, experimentation with each technique separately and work on their efficient combination resulted in identifying when to use each method (see WP4). We switch between the two methods automatically, using triggering events. Virtual characters appeared more realistic to users when animated with motion-captured data for the majority of a simulation. However, in cases where characters fell or were fatally wounded, or when they collided with certain scene objects, motion-captured sequences were less appealing, as significant artefacts were visible. We therefore used ragdoll simulation for character animation in those cases.

We devised a simple, albeit effective, event-triggering mechanism for switching between the methods, which relies primarily on user interaction and in particular on the use of the trainee's firearm. When a virtual character is aimed at and shot, the physically-based animation technique takes over motion generation. In addition, scene objects that have physics properties and react to the virtual characters, as well as to the human user, increase the sense of realism of the simulator by providing better visual feedback through physically correct secondary motions. Switching between the two modes involves handing the motion generation task from the primary Motion Capture animation engine directly to the ragdoll simulator, for special cases only, for example when a virtual character is shot and falls to the ground. As virtual characters do not recover or re-emerge from these fatal poses, there has been no need for animation control to be handed back from the ragdoll simulator to the Motion Capture animation engine. A short sketch of this triggering logic is given after the deliverables below. Figure 5.1 shows a user shooting at a virtual character that then falls to the ground, the point at which the switch from motion capture to dynamics happens. In Figure 5.2 we show screenshots from a training scenario where different characters run, walk or fight, and characters may fall to the ground, where dynamics again take over motion control.

Deliverables
(The Deliverables produced by this Work Package are listed briefly.)
D7: Scientific paper (automatic transition). Owing to the simplicity of the method for switching between captured motions and dynamics, and because the training scenarios require physically-based animation only in certain cases, we have not established a switching method that could constitute a novelty, and therefore no paper has been produced.
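As referenced above, the following Python sketch illustrates the one-way, event-triggered hand-over described in this work package: characters are driven by motion-capture animation until a "shot" event fires, after which the ragdoll simulator takes over and control is never handed back. All class and method names, and the stub engines, are assumptions made for the sketch rather than the project's actual code.

```python
# Illustrative sketch (under assumptions) of the event-triggered switch between
# motion-capture animation and ragdoll (physically-based) simulation.
from enum import Enum

class MotionMode(Enum):
    MOTION_CAPTURE = 1
    RAGDOLL = 2

class _StubEngine:
    """Placeholder so the sketch runs standalone; real engines would wrap the
    Motion Graphs player and the rigid-body ragdoll simulator."""
    def current_pose(self):
        return "pose"
    def activate(self, initial_pose=None, impulse=None):
        pass
    def step(self, dt):
        return "frame"

class VirtualCharacter:
    """Plays motion-capture clips until a triggering event switches it to ragdoll."""
    def __init__(self, name, mocap_engine, ragdoll_engine):
        self.name = name
        self.mode = MotionMode.MOTION_CAPTURE
        self.mocap = mocap_engine
        self.ragdoll = ragdoll_engine

    def on_event(self, event):
        # Triggering events come from the real world (the trainee's tracked dummy
        # firearm) or from within the virtual world (e.g. hitting a physics object).
        if event.get("type") == "shot" and event.get("target") == self.name:
            self.mode = MotionMode.RAGDOLL
            # Hand the current pose over so the ragdoll starts from a matching state.
            self.ragdoll.activate(initial_pose=self.mocap.current_pose(),
                                  impulse=event.get("impulse"))

    def update(self, dt):
        # One-way hand-over: shot characters never return to motion-capture control,
        # mirroring the report (characters do not recover from fatal poses).
        if self.mode == MotionMode.MOTION_CAPTURE:
            return self.mocap.step(dt)
        return self.ragdoll.step(dt)

if __name__ == "__main__":
    suspect = VirtualCharacter("suspect_01", _StubEngine(), _StubEngine())
    suspect.update(1 / 60)                                     # motion-capture frame
    suspect.on_event({"type": "shot", "target": "suspect_01"})
    suspect.update(1 / 60)                                     # ragdoll frame
```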
Work Package Title: WP6 (ΔΕ6): Body and Face Integration
Organisation Code: ΑΦ (HO)  ΣΦ1 (PA1)  ΣΦ2 (PA2)  ΣΦ3 (PA3)  ΣΦ4 (PA4)
Person-months per organisation (as per Contract): 1.5  1.5  1.5
Person-months per organisation (actually worked): 1.5  1.5  1.5

Work Package Objectives (as described in the Contract)
(The objectives of this Work Package are listed briefly.)
The main objectives of work package 6 have been the integration of the system with realistic full-body models of the human figures, together with the integration of emotional (facial) expressions.

Description of Work - Degree of achievement of the Work Package objectives
(The activities belonging to this Work Package (WP) are recorded. The degree of their implementation, any problems that arose and any deviations from the original objectives are reported in detail. Where applicable, numerical and quantitative references are made to the individual stages of the WP, and the organisation that undertook and carried out each activity is clearly identified.)
Given the nature of the project, it is important for the trainees to be able to interpret and understand the emotions of the suspect(s), so as to predict their next movement and use this information to prevent aggressive behaviour.

Methodology and Results
(The methodological approach followed is described in detail and the results produced by this Work Package are analysed.)
The integration of facial expression animation, as well as speech, into the movement of the avatars used for the suspect(s) was examined during the implementation of this work package. Different techniques were tried out, including the use of a parameterised facial model and the use of mathematically modelled muscle deformations. The latter is a powerful technique that works very well for static facial expressions; however, it becomes rather complicated when dealing with real-time movement in a dynamic 3D environment. A stand-alone software program was implemented for generating dynamic facial expressions and was tested in an experimental setting (see Figure 6.1 in Appendix Γ1: the standalone application used to generate facial expressions and apply muscles to any face).

During this process, however, we came across LIFESTUDIO:HEAD Artist, a high-end software toolkit that enables complete integration with any application for high-quality real-time facial animation and automated real-time lip-sync from multiple sources. The fact that this software can automate both the lip synchronisation process (with natural mouth movement) and the generation of facial expressions led us to buy a licence and try it out, to see whether we could get better results. Moreover, the accompanying Muscles Setup plugin allows the integration of real-time speech, movement and expressions into one's own model, including human faces, animals, objects (e.g. a car) or any other creature; the plugin can be used in both 3ds Max and Maya. With the Pro licence, even facial hair can be given the notion of movement, an otherwise daunting task. Figure 6.2 (Appendix Γ1) shows an image from 3ds Max while modifying a particular model and integrating it with a head. Figure 6.3 shows an image from LifeStudio:HEAD during an attempt to transfer the set of muscles from a source model to a destination model. LifeStudio:HEAD Artist is in broad use; one example is the well-known creator of humorous videos known as Roidder (www.youtube.com/user/roidder). Figure 6.4 (Appendix Γ1) shows an image from Roidder's video "The Nightly Potato Ep. 5.5", in which Barack Obama and his wife Michelle are shown watching a speech by Hillary Clinton.

The following table shows the implementation steps followed to create and extract a human face capable of speech, natural mouth movement and showing emotions. The extracted model was then handed over for integration with the rest of the platform.

Table 6.1: Implementation steps followed.
A. Audio
  1. Generation of sound and speech
  2. Storing of sound in wav format
B. LS:Head Editor
  3. Modelling of a specific human face
  4. Integrating speech
  5. Lip and speech synchronisation
  6. Generation of movement and expressions synchronised with speech
  7. Storing the model in gdp format and extracting the mma and mms files associated with speech
C. 3ds Max 2009
  8. Import gdp file from LifeStudio
  9. Integrate speech (mma and mms files)
  10. Integration of the face with the rest of the body
  11. Eye tracking generation
  12. Adding movement (bvh type)
  13. Storing the model as max type to be imported into Unity
For more information please refer to the undergraduate dissertation of Eleni Hadjidemetriou (May 2012, UCY): "Facial Animation Using LifeStudio: HEAD".

An investigation of surface modelling methods has also been carried out in order to find the most suitable way to combine the facial expressions with the human bodies. The human body is by its nature extremely complex and cannot be defined by a regular rectangular grid of vertices; this is why particular attention was given to surface smoothing methods that can generate surfaces defined by an arbitrary topology of vertices. A comparison of such methods was necessary in order to find the most suitable one for introducing additional realism to the characters. Surface subdivision methods have been employed, in particular the Catmull-Clark method, which is the most popular one for surface smoothing.

Subdivision surface algorithms owe their development to the weakness of traditional spline methods, which cannot generate surfaces that are not defined by a regular rectangular grid. This drawback inspired researchers to devote a large amount of research to subdivision methods, which can handle surfaces on arbitrary topological meshes. The Catmull-Clark method is based on a recursive B-spline patch subdivision algorithm. At each step it constructs a new set of points, with more vertices and smaller faces than the original set of points (control points); after a small number of iterations (around four) the new vertices fall on the actual surface defined by the control points, resulting in a smooth figure (Figure 6.5, Appendix Γ1). The method can be applied to any topology of points. After the first iteration all faces have four edges and the number of extraordinary points becomes fixed; after each iteration the surface converges further towards a B-spline surface. The extraordinary points are the points that do not have exactly four edges incident to them; they have attracted a great deal of attention from researchers, who have proved that at least tangent continuity exists at these points. The very good results that this method produces make it one of the best existing today for generating surfaces based on an arbitrary topology of control points.

A modeller has been developed that introduces additional detail on the models using the Catmull-Clark method, and particular emphasis was placed on implementing it in a time-efficient way. Initially, the method was developed using a separate database for each edge, each vertex and each face. For surfaces with fewer than 1,000 vertices the surface was generated very quickly, but above 1,000 vertices it became cost-inefficient, increasingly so as the number of vertices grew; note that the number of vertices increases roughly fourfold after every iteration. A second set of routines was then developed that reduced the execution time by approximately 25 times, based on a newly defined program data structure that combined the vertex, edge and face databases. The introduction of additional detail on a human face using the modeller is illustrated in Figure 6.6 (Appendix Γ1), where (b) shows two subdivisions applied to an initial face (a).
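For completeness, the standard Catmull-Clark refinement rules that the paragraphs above describe verbally are reproduced below in textbook form; this is the published scheme in generic notation, not project-specific formulas.

```latex
% Face point: the average of the k vertices of a face.
\[ f = \frac{1}{k} \sum_{i=1}^{k} v_i \]
% Edge point: the average of the edge's two endpoints and the two adjacent face points.
\[ e = \frac{v_a + v_b + f_1 + f_2}{4} \]
% New position of an original vertex v of valence n, where F is the average of the
% n adjacent face points and R is the average of the midpoints of the n incident edges.
\[ v' = \frac{F + 2R + (n - 3)\,v}{n} \]
```

Extraordinary points are exactly the vertices with valence n different from four; the rules above apply to them unchanged, which is why the scheme handles arbitrary topology.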
At some point during the second half of the project, with the final installation of the new VR system at the Graphics and Hypermedia lab of PA1, the software VR platform used by the lab changed: Panda3D was replaced by Unity3D. Although this meant that much of the software described in this report had to be rewritten to run on the new platform, on the positive side Unity3D integrates a lot of the functionality we had been aiming for; it can deal, up to a certain level, with facial animation as well as level of detail and tessellation. As a result, we considered porting our LIFESTUDIO code to Unity an unnecessary overhead.

Deliverables
(The Deliverables produced by this Work Package are listed briefly.)
D8: Complete software consisting of the full integration of modelling, motion control and facial expressions.
D8: A paper published at the 6th Mediterranean Conference on Information Systems (MCIS 2011), held in Limassol, Cyprus, from the 3rd to the 5th of September 2011. The title of the paper was: "SIMPOL VR - A VIRTUAL REALITY LAW ENFORCEMENT TRAINING SIMULATOR".
Work Package Title: WP7 (ΔΕ7): Platform Development and Testing
Organisation Code: ΑΦ (HO)  ΣΦ1 (PA1)  ΣΦ2 (PA2)  ΣΦ3 (PA3)  ΣΦ4 (PA4)
Person-months per organisation (as per Contract): 2.0  10.5  0.5  1.0
Person-months per organisation (actually worked): 2.0  11.25  0.5  0.25

Work Package Objectives (as described in the Contract)
(The objectives of this Work Package are listed briefly.)
The aim of this work package has been the application of the proposed work, i.e. the implementation of the pilot tool that will help the training of the Cyprus Police Force.

Description of Work - Degree of achievement of the Work Package objectives
(The activities belonging to this Work Package (WP) are recorded. The degree of their implementation, any problems that arose and any deviations from the original objectives are reported in detail. Where applicable, numerical and quantitative references are made to the individual stages of the WP, and the organisation that undertook and carried out each activity is clearly identified.)
The development and testing of the training platform were carried out based on the platform designs created in WP3. We designed a set of scenarios with different features, in order to study how users are affected by different levels of realistic character motion and behaviour in our training platform. In the project proposal, the experimental testing and evaluation (assigned to the second half of the project) was intended to be carried out by PA4, i.e. a member of the Cyprus Police Force. However, because the project in its current configuration is a prototype, and because the police officer we had been communicating with, Mr. Aristos Chrysanthou, left the responsible department (some time after the interim report), the testing was carried out by PA1, and in particular by Dr. Yiorgos Chrysanthou. The person-months and the timesheets have been changed accordingly. We are currently in the process of obtaining further experimental data to collate our findings in a scientific publication.

Methodology and Results
(The methodological approach followed is described in detail and the results produced by this Work Package are analysed.)
As described in the Interim Report, the platform has four main software applications/components: the Scenario Server, the Hardware Abstraction Layer, the Trainee Client and the Trainer Client. Briefly: