Active Vision II (Continued)

The Active Camera Network Approach
In recent years there has been growing interest in building networks of active cameras, optionally combined with static cameras, so that a large area can be covered while maintaining high resolution on multiple targets. This is ultimately a scaled-up version of either the master/slave approach or the autonomous camera approach. The approach can be highly effective, but it is also costly: not only are multiple cameras involved, but they must also communicate with each other, which can be computationally expensive.[14]
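As a rough illustration of the master/slave idea, the sketch below (an assumption, not a published algorithm) has a wide-view master camera report target positions, and each pan-tilt slave camera is greedily assigned the nearest unclaimed target to capture at high resolution:

```python
import math

# Hypothetical sketch of target assignment in a master/slave camera
# network. The wide-view "master" camera supplies target positions;
# each active "slave" camera is greedily assigned the nearest
# unclaimed target to zoom in on.

def assign_targets(slave_positions, target_positions):
    """Greedy nearest-target assignment: returns {slave_index: target_index}."""
    assignments = {}
    unclaimed = set(range(len(target_positions)))
    for s, (sx, sy) in enumerate(slave_positions):
        if not unclaimed:
            break
        best = min(
            unclaimed,
            key=lambda t: math.hypot(target_positions[t][0] - sx,
                                     target_positions[t][1] - sy))
        assignments[s] = best
        unclaimed.discard(best)
    return assignments

slaves = [(0.0, 0.0), (10.0, 0.0)]
targets = [(9.0, 1.0), (1.0, 2.0)]
print(assign_targets(slaves, targets))  # {0: 1, 1: 0}
```

A real network would replace this greedy rule with a communication protocol among cameras, which is where much of the computational cost mentioned above comes from.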


Controlled Active Vision Framework
Controlled active vision can be defined as the controlled motion of a vision sensor that maximizes the performance of any robotic algorithm involving a moving vision sensor. It is a hybrid of control theory and conventional vision. An application of this framework is real-time robotic servoing around static or moving arbitrary 3-D objects (see visual servoing). Algorithms that use multiple windows and numerically stable confidence measures are combined with stochastic controllers to provide a satisfactory solution to the tracking problem that arises when computer vision and control are combined. Where the model of the environment is inaccurate, adaptive control techniques may be introduced. The above information, along with further mathematical treatment of controlled active vision, can be found in the thesis of Nikolaos Papanikolopoulos.[15]
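The core of such servoing schemes is a feedback loop driving an image-space feature error to zero. The sketch below is a minimal proportional servo loop under simplifying assumptions (unit time step, camera motion shifting the feature directly); it is not Papanikolopoulos' actual controller, which adds multiple windows, confidence measures, and stochastic/adaptive elements:

```python
# Minimal proportional visual-servoing sketch (illustrative assumption):
# the camera rate command is proportional to the image-space error
# between the tracked feature and its desired location, so the error
# shrinks geometrically each step.

def servo_step(feature, desired, gain=0.5):
    """One control step: returns (pan_rate, tilt_rate) from the pixel error."""
    ex = desired[0] - feature[0]
    ey = desired[1] - feature[1]
    return (gain * ex, gain * ey)

# Simulate a feature converging to the image center of a 320x240 sensor.
feature = [200.0, 120.0]
desired = (160.0, 120.0)
for _ in range(10):
    vx, vy = servo_step(feature, desired)
    # assume unit time step and that camera motion shifts the feature directly
    feature[0] += vx
    feature[1] += vy
print(round(feature[0], 2), round(feature[1], 2))
```

With gain 0.5 the pixel error halves every iteration, so after ten steps the feature sits essentially at the desired image location; real systems must also handle feature dynamics and measurement noise, hence the stochastic controllers mentioned above.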

Examples
Examples of active vision systems usually involve a robot-mounted camera,[16] but other systems have employed cameras mounted on human operators (also known as "wearables").[17] Applications include automatic surveillance, human–robot interaction (video),[18][19] SLAM, route planning,[20] and more. In the DARPA Grand Challenge, most of the teams combined LIDAR with active vision systems to guide driverless vehicles across an off-road course.

A good example of active vision can be seen in this YouTube video, which shows face tracking using active vision with a pan-tilt camera system: http://www.youtube.com/watch?v=N0FjDOTnmm0
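The control logic behind such a demo can be sketched as follows. Here `detect_face` is assumed to be supplied by some external detector (for instance, OpenCV's Haar cascades could fill that role); the dead zone is a common practical trick to keep the camera still, and the image, when the face is already near the center:

```python
# Hedged sketch of pan-tilt face tracking in the spirit of the linked
# video. The face center is assumed to come from an external detector.
# A dead zone suppresses small corrections so the camera does not
# jitter when the face is roughly centered.

IMG_W, IMG_H = 640, 480
DEAD_ZONE = 30   # pixels
GAIN = 0.02      # degrees of motor step per pixel of error (assumed)

def pan_tilt_command(face_center):
    """Return (pan_step, tilt_step) in degrees, or (0, 0) inside the dead zone."""
    ex = face_center[0] - IMG_W / 2
    ey = face_center[1] - IMG_H / 2
    pan = 0.0 if abs(ex) < DEAD_ZONE else -GAIN * ex
    tilt = 0.0 if abs(ey) < DEAD_ZONE else -GAIN * ey
    return (pan, tilt)

print(pan_tilt_command((400, 240)))   # face right of center -> pan left
print(pan_tilt_command((330, 250)))   # inside dead zone -> no motion
```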

Active vision is also important for understanding how humans[9][21] and other organisms endowed with visual sensors actually see the world, given the limits of their sensors, the richness and continuous variability of the visual signal, and the effects of their actions and goals on their perception.[8][22][23]

The controllable active vision framework can be used in a number of different ways, including vehicle tracking, robotics applications,[24] and interactive MRI segmentation.[25]

Interactive MRI segmentation uses controllable active vision through a Lyapunov control design that balances the influence of a data-driven gradient flow against the human's input over time, smoothly coupling automatic segmentation with interactivity. More information on this method can be found in [25]. Segmentation in MRIs is a difficult problem: because the MRI picks up all fluid and tissue, tracing out the desired segments requires an expert, which makes fully manual segmentation lengthy and often impractical. The controllable active vision methods described in the cited paper could improve the process while relying less on the human.
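The blending idea can be illustrated schematically. The sketch below is only an illustration of the general principle, not the controller from [25]: a scalar segmentation parameter is updated by mixing a data-driven gradient term with a human correction term, with the user-influence weight decaying over time so that automatic evolution gradually takes over:

```python
import math

# Illustrative sketch only (not the controller from [25]): one scalar
# segmentation parameter x is driven by a blend of a data-driven
# gradient term and a user correction. The user-influence weight
# decays exponentially with time, so early interaction dominates and
# the automatic flow takes over later.

def blended_update(x, data_gradient, user_input, t, decay=0.2, step=0.1):
    """One blended update of the parameter x at time step t."""
    w = math.exp(-decay * t)                       # user-influence weight
    update = (1 - w) * (-data_gradient) + w * user_input
    return x + step * update

# Gradient of 0.5*(x - 1)^2 pulls x toward 1; the user also nudges it up.
x = 0.0
for t in range(5):
    x = blended_update(x, data_gradient=x - 1.0, user_input=0.5, t=t)
print(round(x, 3))
```

In the actual method, the Lyapunov design guarantees that this kind of coupling remains stable as the human and the data term interact, which a naive blend like this one does not by itself ensure.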
