METAINTELLIGENCE SPACE


Embedding Intelligence into Virtual Environments

The computing power and 3D graphics software currently available work remarkably well in tandem with maturing AI technologies. This is a great advantage when designing virtual environments, since it allows highly intelligent and autonomous agents to be introduced into these systems. This convergence of developments enables not only higher visual realism but, above all, the capability to add intelligence.

The expansion of 3D graphics standards is a major factor behind the growing use of virtual environments. At the same time, we are at the point where virtual environments and advanced graphics aim to be more than aesthetically pleasing but otherwise lifeless spaces. To that end, developers are incorporating components that demand intelligent responses, such as populating urban models with crowds, developing virtual humans or actors, creating virtual non-humans, and modeling behavior and artificial intelligence at a higher level of representativeness within VE tools.

This article considers the main points of convergence around intelligent virtual environments, with particular attention to agents occupying either end of the spectrum.

CONSTRAINTS

Virtual environments (VEs) and robotics share the need to adhere to real-time processing constraints. In each cycle, the renderer traverses a scene graph, usually a hierarchical composition of the VE's components represented by nodes of varying types linked together. The more complex the scene graph, the longer it takes to traverse, and thus the lower the frame rate.
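
To make this concrete, here is a minimal scene-graph sketch in Python; the ShapeNode and GroupNode classes are hypothetical stand-ins rather than any toolkit's actual API, but they show how per-frame traversal cost grows with the hierarchy.

```python
class Node:
    """Base class for scene-graph nodes."""
    def render(self):
        raise NotImplementedError

class ShapeNode(Node):
    """Leaf node holding a graphical primitive, e.g. a polygon mesh."""
    def __init__(self, name, polygon_count):
        self.name = name
        self.polygon_count = polygon_count

    def render(self):
        # Stand-in for submitting this leaf's polygons to the renderer.
        return self.polygon_count

class GroupNode(Node):
    """Interior node linking children of varying types into a hierarchy."""
    def __init__(self, children):
        self.children = children

    def render(self):
        # The renderer visits every node each frame, so traversal cost
        # (and thus frame time) grows with scene-graph complexity.
        return sum(child.render() for child in self.children)

scene = GroupNode([
    GroupNode([ShapeNode("wall", 200), ShapeNode("door", 150)]),
    ShapeNode("avatar", 5000),
])
print("polygons submitted this frame:", scene.render())  # 5350
```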

To mitigate this problem, designers try to create visually pleasing components with the fewest polygons possible. Additionally, Level of Detail (LOD) mechanisms render a component in greater detail only as the user gets closer. In this way, a frame rate of at least 50 or 60 Hz can be maintained, so that changes appear as smooth animation rather than sudden jerks and the feeling of presence is preserved.
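
The following sketch shows how such an LOD mechanism might select a mesh; the distance thresholds and mesh names are purely illustrative.

```python
def select_lod(distance, lods):
    """Pick the most detailed mesh whose threshold covers the distance.

    lods: (max_distance, mesh_name) pairs, sorted by ascending max_distance.
    """
    for max_distance, mesh in lods:
        if distance <= max_distance:
            return mesh
    return None  # beyond the far threshold: cull the object entirely

# Hypothetical LOD table: nearer viewers get more polygons.
teapot_lods = [
    (10.0, "teapot_high_5000_polys"),
    (50.0, "teapot_medium_800_polys"),
    (200.0, "teapot_low_50_polys"),
]

for d in (5.0, 30.0, 120.0, 500.0):
    print(d, "->", select_lod(d, teapot_lods))
```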

Both robotics and VEs consume processing power in these basic cycles, and adding intelligence competes for the same resources. This is especially true when a single processor is used, but even extra processors working in parallel suffer the same effect if their output is not synchronized with the frame rate. Because of this shortage of processing power, many VEs developed in research labs must be rendered off-line rather than interactively as real-time animation, which precludes their practical use. Although technology has improved enough to make adding intelligence to VEs possible, there still is not enough power for many of the systems created in laboratories.
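
One common mitigation is to cap the time AI may consume in each frame. The sketch below illustrates this under assumed timings; render_scene() and Agent.think() are invented stand-ins, not drawn from any particular system.

```python
import time

FRAME_BUDGET = 1.0 / 60.0          # ~16.7 ms per frame at 60 Hz

class Agent:
    def think(self):
        time.sleep(0.002)          # pretend each agent deliberates for 2 ms

def render_scene():
    time.sleep(0.008)              # pretend rendering takes 8 ms

def run_frame(agents):
    """Render first, then let agents think until the frame budget runs out."""
    start = time.perf_counter()
    render_scene()
    deadline = start + FRAME_BUDGET
    served = 0
    for agent in agents:
        if time.perf_counter() >= deadline:
            break                  # defer remaining agents to the next frame
        agent.think()
        served += 1
    return served

print("agents updated this frame:", run_frame([Agent() for _ in range(10)]))
```

Deferring unserved agents to later frames keeps the animation smooth, at the cost of slower deliberation.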

VE TOOLS

It is clear that tools and development environments, as with other sophisticated systems, play a crucial role in the progress of this field, given the combination of techniques needed to drive such a mix of advanced technologies. Three issues deserve consideration: (1) how abstract the support is, (2) how knowledge is represented, and (3) how complex properties are integrated.

1. SUPPORT

Most generally available VE toolkits are even more constraining, since they focus on visual realism and graphical support rather than on adding intelligence to the VE. At the most basic level, one might build a system from a 3D library and C++, accepting the usual trade-off of extra time and effort in exchange for flexibility. At this level, the concept of an agent is quite vague compared with the notion of a virtual agent discussed later on.

At the next level of abstraction, VE toolkits use the scene-graph representation already discussed. In the scene graph, leaf nodes normally represent graphical primitives such as polygons, which makes it easy to represent the graphical aspects of objects. By attaching primitives to group nodes, they can then be combined into more complex graphical objects.
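
A small sketch of this grouping idea, again with invented Leaf and Group classes: an offset attached to a group node applies to every primitive beneath it, so a composite object can be placed or moved as a single unit.

```python
class Leaf:
    """Graphical primitive with a position relative to its parent."""
    def __init__(self, name, x=0.0):
        self.name, self.x = name, x

    def place(self, offset=0.0):
        yield self.name, self.x + offset

class Group:
    """Group node: its offset applies to every descendant."""
    def __init__(self, children, x=0.0):
        self.children, self.x = children, x

    def place(self, offset=0.0):
        for child in self.children:
            # Offsets accumulate down the tree, so moving the group
            # moves the whole composite object at once.
            yield from child.place(offset + self.x)

car = Group([Leaf("body"), Leaf("wheel_front", 1.0), Leaf("wheel_back", -1.0)],
            x=5.0)
print(list(car.place()))
# [('body', 5.0), ('wheel_front', 6.0), ('wheel_back', 4.0)]
```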

2. KNOWLEDGE REPRESENTATION

If we want to manipulate objects at a knowledge level and attach knowledge to them, a scene-graph representation is unlikely to suffice. It is therefore time for VE toolkit designers to seriously consider integrating explicit knowledge representation facilities. AI has much to offer in this area: drawing on its many existing knowledge representations avoids reinventing the wheel.
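
As a rough illustration of what such a facility might look like, here is a hypothetical frame-and-slot scheme in the classic AI style, attached alongside the graphical data; the concepts and slots are invented for the example.

```python
# Hypothetical frame/slot knowledge keyed by concept name.
knowledge = {
    "beer_bottle": {
        "is_a": "container",
        "holds": "liquid",
        "affords": ["grasp", "pour", "drink"],
    },
    "container": {"is_a": "physical_object", "affords": ["carry"]},
    "physical_object": {"affords": []},
}

def affordances(concept):
    """Collect affordances up the is_a hierarchy -- a simple inference
    that a bare scene graph cannot express."""
    result = []
    while concept is not None:
        frame = knowledge[concept]
        result.extend(frame.get("affords", []))
        concept = frame.get("is_a")
    return result

print(affordances("beer_bottle"))  # ['grasp', 'pour', 'drink', 'carry']
```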

Toolkits tend to favor the graphical representation and the user's visual point of view. The facilities they include detect collisions with environment objects using bounding boxes, and animations are pre-programmed along trajectories calculated in advance. There is no support for autonomous motion of objects driven by virtual sensors.

Even the sensors VE toolkits provide are tailored to detecting user interaction, such as generating an alarm when a wall is hit or triggering events from a mouse click. Furthermore, any interesting AI behavior must be programmed in whatever language the toolkit supports: typically C++ for proprietary products and Java for VRML.
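
The contrast might look like the sketch below, in which both sensor classes are invented for illustration: the first mirrors the user-interaction sensors toolkits already offer, the second the kind of virtual sensor an autonomous object or agent would need.

```python
class CollisionSensor:
    """Toolkit-style sensor: fires a callback when the user hits a wall."""
    def __init__(self, on_hit):
        self.on_hit = on_hit

    def check(self, user_pos, wall_x):
        if user_pos >= wall_x:
            self.on_hit()

class VirtualEye:
    """Agent-side virtual sensor: reports objects within a view range."""
    def __init__(self, view_range):
        self.view_range = view_range

    def perceive(self, agent_pos, objects):
        return [name for name, pos in objects.items()
                if abs(pos - agent_pos) <= self.view_range]

# User-interaction sensing, as current toolkits provide...
CollisionSensor(lambda: print("alarm: wall hit")).check(user_pos=10.2, wall_x=10.0)
# ...versus the perception an autonomous agent would need.
print(VirtualEye(5.0).perceive(agent_pos=0.0, objects={"door": 3.0, "bottle": 9.0}))
```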

3. INTERACTION WITH COMPLEX PROPERTIES

If we want to give objects in a VE properties more complex than visual characteristics, we face the issue of how they interact with the VE and with each other. Visual interactions typically take place between objects and users in terms of texture, lighting effects, level of detail, and so on. Unfortunately, in a standard environment objects rarely interact with one another beyond one occluding another from the user's view. But as things become more intricate, the number of interactions between the objects and the VE itself grows swiftly. It then becomes necessary to consider whether these interactions should be determined mostly by the object or by its setting.

No answer is totally satisfactory, but an interesting approach is for the object itself to contain the properties and know-how governing its interaction with its surroundings. Goldberg's 'inverse causality' does just that: it puts animations describing object-actor interactions inside the object itself, so that the actor needs no further training to, say, pick up a virtual bottle of beer and drink from it.
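
A toy sketch of this idea follows, with invented class and method names rather than Goldberg's actual implementation: the interaction script lives inside the object, and any actor can simply invoke it.

```python
class SmartObject:
    """Object that stores its own object-actor interaction scripts."""
    def __init__(self, name, interactions):
        self.name = name
        self.interactions = interactions    # action -> animation steps

    def perform(self, action, actor):
        for step in self.interactions[action]:
            print(f"{actor} {step} the {self.name}")

beer_bottle = SmartObject("beer bottle", {
    "drink": ["reaches for", "grasps", "raises", "tilts", "lowers"],
})

# Any actor can use the object without being trained on drinking.
beer_bottle.perform("drink", actor="virtual human")
```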

It may seem paradoxical that the physics of gravity has to be brought into a VE explicitly. All items in such an environment should abide by the rules of physics, whether falling under gravity or floating in zero-gravity conditions; consider, for example, a fish placed into a VE that contains no water. To model realistic impacts between objects in the VE, properties such as force must be represented. Work on common ontologies in Artificial Intelligence can also contribute significantly toward this end.
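
Here is a minimal sketch of how an environment might apply such physical properties, with the attributes (mass, required medium) chosen purely for illustration.

```python
GRAVITY = -9.81  # m/s^2

class VEObject:
    def __init__(self, name, mass, needs_medium=None):
        self.name, self.mass = name, mass
        self.needs_medium = needs_medium    # e.g. a fish needs "water"
        self.velocity = 0.0

class Environment:
    def __init__(self, gravity_on=True, medium=None):
        self.gravity_on, self.medium = gravity_on, medium

    def step(self, obj, dt=0.1):
        if self.gravity_on:
            obj.velocity += GRAVITY * dt    # everything falls unless disabled
        if obj.needs_medium and obj.needs_medium != self.medium:
            print(f"{obj.name}: required medium '{obj.needs_medium}' "
                  f"is missing from this VE")

dry_room = Environment(gravity_on=True, medium="air")
fish = VEObject("fish", mass=1.0, needs_medium="water")
dry_room.step(fish)
print("fish velocity after one step:", round(fish.velocity, 3))
```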

These difficulties are hardly surprising, given that VE toolkits were never designed to support the kind of functionality discussed here. A new generation of VE tools, developed in collaboration between the AI and VE communities, is therefore an enterprise that could profitably be undertaken.

TAKEAWAY

While advances in 3D graphics and AI are enabling more realistic and intelligent virtual environments, there are still significant challenges. Massive processing power is required to achieve real-time interaction and true intelligence within VEs. Current VE tools focus primarily on visual realism rather than adding advanced functionality like knowledge representation and simulation of complex object properties.

To truly take advantage of the convergence between AI and virtual reality, a next generation of VE tools is needed that integrates knowledge representation facilities, support for complex object properties, and ontologies. This would allow objects in VEs to have intelligent behaviors driven by their own knowledge and properties, rather than being pre-programmed.

VEs and AI must also contend with real-time constraints that limit processing power available for rendering graphics and complex AI tasks. Improvements in processing speed, parallel processing and optimization techniques will be essential to unlock more intelligent and interactive VEs.

Overall, while very promising, using AI to create truly intelligent virtual environments remains an ambitious goal that will require advances on multiple fronts. But as graphics capabilities continue to mature and AI technologies progress, we are moving steadily closer to more natural and seamless integration between the two fields. With the right developments in VE tools, standards and processing approaches, we may eventually achieve the vision of virtual worlds populated by autonomous, intelligent agents that rival our own.

Title image credits: Xponential design