
Microsoft’s Kinect could assist in your next surgery

Tuesday, January 18th, 2011

UW students adapt gaming hardware for robotic surgery

(The Daily) – A group of graduate engineering students has adapted Microsoft’s new Kinect technology for a surprising purpose: surgical robotics.

The method involves using the Kinect (an array of cameras and sensors that allow video-game users to control their Xbox 360s with their bodies) to give surgeons force feedback when using tools to perform robotic surgery.

“For robotics-assisted surgeries, the surgeon has no sense of touch right now,” said Howard Chizeck, UW professor of electrical engineering. “What we’re doing is using that sense of touch to give information to the surgeon, like ‘You don’t want to go here.’”

Currently, surgeons commonly use robotic tools for minimally invasive surgeries. Tubes with remotely controlled surgical instruments on the ends are inserted into the patient in order to minimize scarring. Surgeons control the instruments with input devices that resemble complex joysticks, and use tiny cameras in the tubes to see inside the patient.

The problem, however, is that surgeons have no realistic way to feel what they are doing. If they move a surgical instrument into something solid, the instrument will stop but the joystick will keep moving.

Electrical engineering graduate student Fredrik Ryden solved this problem by writing code that allowed the Kinect to map and react to environments in three dimensions, and send spatial information about that environment back to the user.
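The core of such a system is back-projecting each pixel of the Kinect’s depth image into a 3-D point. Below is a minimal sketch of that step using the standard pinhole-camera model; the intrinsic values and function name are illustrative assumptions, not Ryden’s actual code.

```python
# Hypothetical intrinsics for a Kinect-style depth camera
# (assumed example values, not an actual calibration).
FX = FY = 525.0          # focal length in pixels
CX, CY = 319.5, 239.5    # principal point for a 640x480 depth image

def depth_pixel_to_point(u, v, depth_m):
    """Back-project one depth pixel (u, v), with depth in meters,
    into a 3-D point in the camera frame (pinhole model)."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)
```

Running this over every pixel of a depth frame yields a point cloud of the scene, which the controller can then test the tool position against.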

This places electronic restrictions on where the tool can be moved; if the actual instrument hits a bone, the joystick that controls it stops moving. If the instrument moves along a bone, the joystick follows the same path. It is even possible to define off-limits areas to protect vital organs.

“We could define basically a force field around, say, a liver,” said Chizeck. “If the surgeon got too close, he would run into that force field and it would protect the object he didn’t want to cut.”
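The “force field” Chizeck describes can be modeled as a spring-like repulsive force that is zero outside a protective sphere around the organ and pushes the tool outward in proportion to how far it has penetrated. This is a minimal sketch of that idea under assumed geometry and stiffness, not the team’s implementation:

```python
import math

def repulsive_force(tool_pos, organ_center, radius, stiffness=200.0):
    """Spring-like 'force field' around a protected region:
    zero force outside the sphere, outward-pointing force
    proportional to penetration depth inside it."""
    dx = [t - c for t, c in zip(tool_pos, organ_center)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist >= radius or dist == 0.0:
        return (0.0, 0.0, 0.0)   # outside the field (or at the center)
    penetration = radius - dist
    scale = stiffness * penetration / dist
    return tuple(scale * d for d in dx)  # force pushes tool back out
```

Fed to a haptic joystick at each control cycle, a force like this is what the surgeon would feel as resistance when approaching the protected organ.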

At first it was suggested that presurgery CT scans be used to define these regions. However, Chizeck’s group came up with the idea of using a “depth camera” — a sensor that detects movement in three dimensions by measuring reflected infrared radiation — to define those regions automatically. At a meeting on a Friday afternoon in December, a team member suggested using the newly released Kinect.

“It’s really good for demonstration because it’s so low-cost, and because it’s really accessible,” said Ryden, who built the system over a single weekend. “You already have drivers, and you can just go in there and grab the data. It’s really easy to do fast prototyping because Microsoft’s already built everything.”

Before the idea of using the Kinect came up, a similar system would have cost around $50,000, Chizeck said.
