An Augmented Reality Based Simulator for 4-D Sonography
This project is a simulator for 4-D sonography with an augmented reality system, intended to give a better understanding of the developing fetus, to enable early detection of fetal abnormalities, and to help plan precise surgical procedures that reduce risk during surgery. The simulation system includes auditory feedback and provides important physiological values such as blood pressure, heart rate, and oxygen supply, which are necessary for training physicians before the actual operation is performed. This plays an important role in increasing the efficiency of training, since the physician can practice on the augmented simulator before operating on the subject directly, concentrating on the vaginal delivery rather than on a remote computer screen. In addition, forceps are modeled and an external optical tracking system is integrated in order to provide visual feedback while training with the simulator on complicated procedures such as forceps delivery.
About the Project
Our team is working to develop and operate a system that allows a physician to see directly inside a patient, using augmented reality (AR). AR combines computer graphics with images of the real world. This project uses ultrasound echography imaging, laparoscopic range imaging, a video see-through head-mounted display (HMD), and a high-performance graphics computer to create live images that combine computer-generated imagery with the live video image of a patient. An AR system displaying live ultrasound data or laparoscopic range data in real time and properly registered to the part of the patient that is being scanned could be a powerful and sensitive tool that could be used to assist and to guide the physician during various types of ultrasound-guided and laparoscopic procedures.
Augmented Reality (AR) is a growing area in virtual reality research. An augmented reality system generates a composite view for the user: a combination of the real scene viewed by the user and a virtual scene generated by the computer that augments the scene with additional information. The wide scope of application domains reveals that the augmentation can assume a number of different forms. Therefore, we compare different technologies for augmented reality visualization, focusing on video see-through head-mounted displays, optical see-through head-mounted displays, virtual retinal displays, and spatial displays.
Different Stages of Implementation
- Implementation of AR environment using a virtual object
- Implementation of edge detection algorithm
- Implementation of motion to 3D object by touching the virtual object
- Generating 3-D images from 2-D images obtained from sonography
- Implementation of virtual uterus and fetus
- Tracking motions of the womb to generate a real-time virtual womb
- Adding the different layers of the womb and other physiological information to the virtual womb
- Detecting internal and external abnormalities of the real fetus and highlighting them in the virtual system
- Implementation of surgery simulation
- Tracking the movements of the surgeon's forceps and reproducing them in the virtual system
- Giving real world looks and effects to the virtual system
- Designing and implementing special hardware for practical on-site deployment
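One of the stages above is an edge detection algorithm. As a minimal sketch of that stage, the following class applies the Sobel operator to a grayscale frame; the class and method names are illustrative, not part of the project's actual codebase.

```java
// Sobel edge detection sketch for the edge-detection stage of the pipeline.
// Input: grayscale image as int[height][width] with 0..255 values;
// output: per-pixel gradient magnitude, clamped to 0..255.
public class SobelEdgeDetector {
    static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    public static int[][] detect(int[][] gray) {
        int h = gray.length, w = gray[0].length;
        int[][] edges = new int[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int sx = 0, sy = 0;
                // Convolve the 3x3 neighbourhood with both kernels.
                for (int j = -1; j <= 1; j++) {
                    for (int i = -1; i <= 1; i++) {
                        int p = gray[y + j][x + i];
                        sx += GX[j + 1][i + 1] * p;
                        sy += GY[j + 1][i + 1] * p;
                    }
                }
                // Gradient magnitude, clamped to the 8-bit range.
                edges[y][x] = Math.min(255, (int) Math.hypot(sx, sy));
            }
        }
        return edges;
    }
}
```

A vertical intensity step in the input produces a strong response along the step and zero response in the flat regions, which is the behaviour the pipeline relies on when outlining the fetus in the sonography frames.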
Medical Augmented Reality Systems
The first medical applications of AR, to neurosurgery, were developed independently. AR has also been applied to otolaryngology. These applications demand less of the AR system than laparoscopy for four reasons: the surgical field is small, the patient doesn't move, the view into the patient is from a single viewpoint and view direction, and this viewpoint is external to the patient. This simplifies the difficult task of building an enhanced visualization system.
Medical applications of AR have until recently concentrated on ultrasound-guided procedures. In such a system, the ultrasound data is captured as a video stream and registered to the patient in real time. The physician's head must be tracked in order to view the dynamic data from any direction. We calibrate the location of the ultrasound data with respect to the probe geometry and track the probe location. These two tasks enable registration of multiple discrete slices to each other and registration of the ultrasound data set to the patient.
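The two tasks above amount to composing two rigid transforms: a one-time image-to-probe calibration and a per-frame probe-to-world pose from the tracker. A minimal sketch, using plain 4x4 row-major homogeneous matrices (the class and method names are illustrative assumptions):

```java
// Registration chain sketch: an ultrasound pixel (in mm) is mapped
// image -> probe (calibration) -> world (tracker pose).
public class UltrasoundRegistration {
    // c = a * b for 4x4 row-major homogeneous matrices.
    static double[][] mul(double[][] a, double[][] b) {
        double[][] c = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // Apply a transform to a 3D point (homogeneous w = 1 assumed).
    static double[] apply(double[][] m, double[] p) {
        double[] q = new double[3];
        for (int i = 0; i < 3; i++)
            q[i] = m[i][0] * p[0] + m[i][1] * p[1] + m[i][2] * p[2] + m[i][3];
        return q;
    }

    // imageToProbe comes from one-time calibration;
    // probeToWorld is updated by the tracking system every frame.
    public static double[] pixelToWorld(double[][] imageToProbe,
                                        double[][] probeToWorld,
                                        double[] pixelMm) {
        return apply(mul(probeToWorld, imageToProbe), pixelMm);
    }
}
```

Because the calibration matrix is fixed, only the tracker pose changes per frame, which is what lets multiple discrete slices be registered both to each other and to the patient.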
The major new technology needed for laparoscopic visualization is acquisition of the depth map associated with the image from the laparoscopic camera. Determination of 3D scene structure from a sequence of 2D images is one of the classic problems in computer vision. There are numerous techniques for computing 3D structure, including cues from motion, stereo, shading, focus, defocus, contours, and structured light.
There are four primary hardware components to our system. Three are the standard tools of AR systems: an image generation platform, a set of tracking systems, and a display system. The fourth component required for this application is a 4-D sonography machine that can acquire both color and range data.
We needed an image generation platform capable of acquiring multiple, real-time video streams. We load video imagery from the cameras directly into the frame buffer of the display. We augment this background image with a registered model of the patient's skin acquired during system calibration. We then render the synthetic imagery in the usual manner for 3D computer graphics. At pixels for which there is depth associated with the video imagery (e.g. the patient's skin), the depths of the synthetic and video imagery are compared, and the synthetic imagery is painted only if it is closer. This properly resolves occlusion between the synthetic imagery and the patient's skin.
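The per-pixel depth comparison described above can be sketched as a simple compositing pass over flat pixel arrays (names and data layout are illustrative; a real implementation would use the graphics hardware's z-buffer):

```java
// Occlusion-resolution sketch: the synthetic pixel is painted only
// where it is closer to the camera than the video depth (e.g. the
// patient's skin); elsewhere the live video pixel is kept.
public class DepthComposite {
    public static int[] composite(int[] videoRgb, float[] videoDepth,
                                  int[] synthRgb, float[] synthDepth) {
        int[] out = new int[videoRgb.length];
        for (int i = 0; i < out.length; i++) {
            // Smaller depth value means closer to the camera.
            boolean synthCloser = synthDepth[i] < videoDepth[i];
            out[i] = synthCloser ? synthRgb[i] : videoRgb[i];
        }
        return out;
    }
}
```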
We use webcams to track the physician's movements. This offers a high update rate, a high degree of accuracy, and a large range of head positions and orientations. The large range allows the physician to move freely around the patient and to examine the patient from many viewpoints.
Depth Calibration and Extraction
We measure the reflected sound pattern for a set of known depths and store the results in a table. By imaging each potential sound from the projector onto a flat grid at a known depth, we can determine the 3D location of the point at each pixel in the camera image. With several depths, we can build a table indexed by the column number from the projector and the x and y-coordinate on the camera image plane. At each cell in the table is a 3D point. Simple thresholding determines which pixels in the camera image are illuminated by the light stripe generated by reflected sound.
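The calibration table described above can be sketched as a dense lookup structure indexed by projector column and camera pixel; each cell stores the 3D point determined during calibration, and depth extraction becomes a direct lookup for every thresholded pixel. Dimensions, names, and the storage scheme here are illustrative assumptions.

```java
// Sketch of the depth calibration table: imaging a flat grid at a set
// of known depths fills each (projector column, camera y, camera x)
// cell with the 3D point seen at that pixel.
public class DepthTable {
    // [column][camY][camX] -> {X, Y, Z}, null until calibrated.
    private final double[][][][] table;

    public DepthTable(int columns, int camHeight, int camWidth) {
        table = new double[columns][camHeight][camWidth][];
    }

    // Calibration: store the 3D point measured for this cell.
    public void record(int column, int camY, int camX, double[] point3d) {
        table[column][camY][camX] = point3d;
    }

    // Extraction: a thresholded pixel lit by stripe 'column' maps
    // straight to the stored 3D point (no per-frame geometry solve).
    public double[] lookup(int column, int camY, int camX) {
        return table[column][camY][camX];
    }
}
```

The design choice is the usual table-lookup trade: calibration is slow and done once, but per-frame depth extraction is a constant-time array access per pixel, which is what makes real-time range acquisition feasible.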
Video-See-Through Head-Mounted Display or Spatial Display
A Head-Mounted Display (HMD) places registered virtual graphics over the user's view of the physical world. HMDs are optical see-through or video see-through in nature, and carry a six-degree-of-freedom sensor. The main advantage of HMD-based AR is the immersive experience for the user: the graphical information is slaved to the user's view.
Spatial Augmented Reality (SAR) makes use of digital projectors to display graphical information directly on a physical object. The difference is that in SAR the display is separated from the user rather than associated with each user, so SAR scales naturally to groups of users.
Benefits of Augmented Reality
Augmented reality (AR) refers to systems that attempt to merge computer graphics and real imagery into a single, coherent perception of an enhanced world around the user. Emerging AR technologies have the potential to reduce the problems caused by the visual limitations of laparoscopy. The AR system can display the resulting 3D imagery in the proper place with respect to the exterior anatomy of the patient. By acquiring depth information and rendering true 3D images of the structures visible in the 4-D sonography view and the laparoscopic camera, the AR system gives the physician most of the depth cues of natural vision. (Exceptions include focus and visual acuity.) The display of the laparoscopic data is not limited to the current viewpoint of the camera, but can include data acquired from a previous camera location. Thus objects not currently within view of the camera can still be displayed by the AR system.
We want to emphasize that this technology is fundamentally different than coupling a stereo laparoscope with a stereo display system. AR systems allow the surgeon to view the medical imagery from the natural viewpoint, use head-induced motion parallax (instead of hand-eye coordination and camera-induced motion parallax), allow the medical imagery to be visually aligned to the exterior anatomy of the patient, and incorporate body-relative cues.
The lack of depth perception in laparoscopic surgery might limit delicate dissection or suturing. An AR display presents objects in correct perspective depth, assuming that the geometry has been accurately acquired. With an AR guidance system, a laparoscopic surgeon might be able to view the peritoneal cavity from any angle merely by moving his head, without moving the endoscopic camera. AR may be able to free the surgeon from the technical limitations of the imaging and visualization methods, recapturing much of the physical simplicity and direct visualization characteristic of open surgery.
Requirements
Hardware Requirements
- High-speed computer
- Graphics card
- Webcams
- CCTV cameras
- Special display unit
- 4-D sonography machine
Software Requirements
- Java 1.6 and above
- NetBeans IDE 6.1 and above
- JMF (Java Media Framework)
- Java 3D architecture
Team Members: Ayan Sinha, Rahul Raja, Rishikesh Jagdale, Ketan Barapatre.