Lecture 6

Virtual Reality Modelling Language

6.1. Events and routes

We mentioned before that a VRML file acts as a blueprint for our virtual world, and that essentially this file is a static description of how the 3D scene will be built. In order to create a dynamic world we include "wiring" instructions within this blueprint to state how nodes will be connected together. Using this wiring we can connect useful nodes together, such as connecting a TouchSensor node (which senses user mouse actions over an object) to the on/off flag for a SpotLight node. Within VRML this wiring is performed using routes and node events. We can define a route to connect between the event inputs and outputs of VRML nodes. When two nodes are routed together, the first node can send a message or event to the second node along this route. By wiring together multiple nodes you can create complex circuits which can be used to give your VRML world a dynamic nature. All of the examples used in previous lectures which involved some form of animation, such as illustrating a rotation, were performed using routed nodes.

6.1.1. Node events

Most types of VRML nodes can be routed together to form a circuit. If you have a look at the VRML specification for many of the nodes you will notice that alongside the various field declarations are a number of eventIn and eventOut declarations. These are the input and output connections for the node. By routing an eventOut from one node to the eventIn of another node you are creating a circuit between those nodes. An eventOut sends events out when it is connected to a route, and an eventIn receives any events sent to it. If we now look at the full definition of a node, there are a few more things we can explain. As an example, we will look at the Transform node definition.

  Transform {
    eventIn      MFNode      addChildren
    eventIn      MFNode      removeChildren
    exposedField SFVec3f     center           0 0 0
    exposedField MFNode      children         [ ]
    exposedField SFRotation  rotation         0 0 1 0
    exposedField SFVec3f     scale            1 1 1
    exposedField SFRotation  scaleOrientation 0 0 1 0
    exposedField SFVec3f     translation      0 0 0
    field        SFVec3f     bboxCenter       0 0 0
    field        SFVec3f     bboxSize         -1 -1 -1
  }

Now, if we look at the left hand side of each line, we can see what type of definition it is. In this node we are declaring two eventIns, six exposed fields and two fields. Notice that each of these also has a field or event type such as MFNode or SFRotation; even events must be of a particular type, and you cannot route an eventOut of one type to an eventIn of a different type.

When we spoke about the fields of a node in previous lectures, we treated them as all the same. In the above definition there are in fact two kinds, field and exposedField, which have a subtle difference. Anything declared as an exposedField has two implicit events created for it, whereas anything declared as a field does not. This means that a normal field can only be given a value in the VRML file, but an exposedField can also be routed to and its value changed dynamically in the virtual world.

Each exposedField has associated with it one eventIn whose name begins with "set_" and one eventOut whose name ends with "_changed". So, in the Transform node, the scale field will have two events associated with it, an eventIn called "set_scale" and an eventOut called "scale_changed".
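Jumping ahead slightly to the ROUTE syntax of the next section, these implicit events can be used like any explicitly declared event. Below is a minimal sketch (the node names here are our own) which routes into the implicit set_scale eventIn of a Transform from a PositionInterpolator (a node we will meet properly in section 6.2; driving it with a clock is covered in section 6.3):

  #VRML V2.0 utf8
  DEF Grower Transform {
    children [ Shape { geometry Box { } } ]
  }
  # A PositionInterpolator outputs SFVec3f events, which
  # matches the SFVec3f type of the scale exposedField
  DEF Sizer PositionInterpolator {
    key      [ 0.0, 1.0 ]
    keyValue [ 1 1 1,  2 2 2 ]
  }
  ROUTE Sizer.value_changed TO Grower.set_scale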

6.1.2. Routing between events

You can build up a VRML circuit by routing the eventOut from one node to the eventIn of another node. The event types must match exactly, so you cannot connect an MFFloat event to an MFColor or SFFloat event. When you send an event to a node, the reaction depends on the type of the node, the eventIn which receives the event, and the value carried by the event.

As an example, if you send an SFBool event with a TRUE value to the "set_on" eventIn of a light node, then the light will turn on. Similarly, if you send a floating point value to the "set_intensity" eventIn of the light node then the light's brightness will change.

In order to wire up two nodes, we need to be able to refer to the nodes somehow. This is where the DEF keyword and node naming come into use again. To route between two nodes, you must define a name for both of those nodes. To create a route, we use the "ROUTE" keyword (which must be in capitals), then specify the node name and eventOut name we are routing from, followed by the "TO" keyword, then the node name and eventIn name we are routing to. As an example, if we assume that we have a TouchSensor node defined with the name "Touch" and a PointLight node defined with the name "Light", then we could create a route between them as follows:

ROUTE Touch.isActive TO Light.set_on

Note the syntax of this route declaration. We specify the route using the keyword "ROUTE". We then list the name of the node we are routing from, "Touch". The dot following this name separates the node name from the event we are routing from. So the statement "Touch.isActive" says we are connecting to the event "isActive" on the node named "Touch". We then use the keyword "TO" to separate where we are routing from and where we are routing to. Finally, we give the node name and event name which we are routing to. That's all there is to it. The normal place to put route declarations is at the bottom of the VRML file.

By wiring up various nodes we can create dynamic worlds and animation, and allow the user to interact with our worlds by sensing their actions. By also creating scripts which process and generate events we can define extremely complex behaviours.

6.2. Keyframe animation and interpolators

In order to provide animation in our VRML worlds we need to understand the mechanism of the interpolator nodes it provides. The two main interpolators we will look at are the PositionInterpolator and the OrientationInterpolator nodes. As you can probably guess, the former can be used to animate the position of an object (though it does more!) and the latter can be used to animate its rotation.

A very important point is the time frame used for these interpolator nodes. In VRML it is much easier to specify animation time as a fractional time value. Basically, the time allocated for an entire animation is 1.0 unit: the animation begins at time 0.0 and ends at time 1.0, and all values in between are during the animation. So, we need to plan out our animation within this fractional time. If we are planning a continuous (cyclic) animation, e.g. a plane flying round and round all the time, then we ensure that the starting point at time 0.0 is the same as the ending point at time 1.0. Thus, when the fractional time loops round, the animation will appear continuous.

In order to create an animation, we need to specify the new position or rotation of an object for each step in time. One way of doing this would be to have a complete table of values for time steps 0.00, 0.01, 0.02, 0.03 … 1.00. This is not really feasible, as the browser may generate events at any fractional time between 0.0 and 1.0. A much better solution is to use keyframe animation. In this case we only specify the new position or rotation of an object at a few key fractional times. These values are termed key values. The VRML interpolator nodes use these keyframes to calculate the new positions for all the time steps between key values. Using this technique we can specify complex animation using only a small number of key times and key values.
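Between two neighbouring keyframes a position interpolator performs a simple linear interpolation. For a fraction f lying between key[i] and key[i+1], the output value is:

  value = keyValue[i] + (f - key[i]) / (key[i+1] - key[i]) * (keyValue[i+1] - keyValue[i])

For example, if key times 0.0 and 0.5 hold X positions 0.0 and 2.0, then a fraction of 0.25 gives X = 0.0 + (0.25 / 0.5) * (2.0 - 0.0) = 1.0, half way along that segment. (Orientations are interpolated along the arc between the two key rotations rather than component by component.)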

As an example, look at figure 1 below which shows the X and Y positions of an object over the fractional time 0.0 to 1.0. We have specified 4 key value positions, but the VRML browser interpolates between these key values to create a smooth movement.

Figure 1. Illustrating the movement of an object under keyframe animation.

If we create a table of these keyframes, taking the key values to be evenly spaced in time, it would be as follows:

  Fractional time     Key position (X, Y)
  ---------------     -------------------
       0.00                0.0,  0.0
       0.33                2.0,  3.0
       0.66                3.0, -3.0
       1.00                5.0,  0.0
The same principles apply for the OrientationInterpolator node except that it produces a rotation axis and rotation angle instead of an XYZ position.

6.2.1. The PositionInterpolator node

If we look at the definition for the PositionInterpolator node we will see how we could create an animation path as shown above.

  PositionInterpolator {
    exposedField MFFloat key              []
    exposedField MFVec3f keyValue         []
    eventIn      SFFloat set_fraction
    eventOut     SFVec3f value_changed
  }

The syntax for this node is extremely simple and functional. The only two fields we need to specify are the key and keyValue fields. These specify a list of key times and the corresponding key values for those times. If we take the above example then we would construct our PositionInterpolator as follows. Note that the commas are used purely for readability.

#VRML V2.0 utf8

PositionInterpolator {
  # First enter the key times (evenly spaced here)
  key [ 0.0, 0.33, 0.66, 1.0 ]

  # Now the XYZ key values at each key time
  # Note that we must specify a Z value even though
  # we are not using it.
  keyValue [
     0.0  0.0  0.0,
     2.0  3.0  0.0,
     3.0 -3.0  0.0,
     5.0  0.0  0.0
  ]
}

To use this PositionInterpolator we would need to route other nodes to it. The two events provided allow us to do this. The eventIn set_fraction allows us to specify a new fractional time value. Using this time value the PositionInterpolator will calculate the appropriate XYZ values and send them using the eventOut value_changed. We can then route this value_changed eventOut to the set_translation eventIn of a Transform node.
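As a complete sketch of this wiring (the node names are our own), the following world slides a box up and back down continuously; the TimeSensor which drives the set_fraction eventIn is described in section 6.3.2:

  #VRML V2.0 utf8
  DEF Object Transform {
    children [ Shape { geometry Box { } } ]
  }
  # A looping two second clock
  DEF Clock TimeSensor { cycleInterval 2.0  loop TRUE }
  # Up two units at half time, back to the start at full time
  DEF Mover PositionInterpolator {
    key      [ 0.0, 0.5, 1.0 ]
    keyValue [ 0 0 0,  0 2 0,  0 0 0 ]
  }
  ROUTE Clock.fraction_changed TO Mover.set_fraction
  ROUTE Mover.value_changed    TO Object.set_translation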

6.3. Animation

If we want to add movement to our world we can animate the position, orientation and scale of any nested co-ordinate system we have created. As this co-ordinate system changes, so will any objects built within it. So if we animate a rotation of a co-ordinate system, then all objects and other nested co-ordinate systems built within it will also rotate.

In VRML, to create any form of animation we must route together several nodes to form a small circuit. Let us take the example of rotating a co-ordinate system when the user clicks on an object (similar to many of the examples I have used in the lectures). To do this we need to route together 4 nodes. Firstly we need to have a switch which the user can click to start the animation - this is provided by the TouchSensor node. We connect this node to a clock which will provide the timing for our animation. The clock is defined with a TimeSensor node, our switch will start the clock up. The output from the clock is then routed to the animation controller. This controller takes an input value or time and produces a new position, rotation or whatever for the object we are animating. For animating rotations we use the OrientationInterpolator node. Finally, we connect this node to the Transform node of the co-ordinate system we want to animate. This may seem very convoluted, but it provides immense flexibility. The arrangement is shown below in figure 2.

Figure 2. Routing between nodes to provide an animation circuit.

We will now look in detail at each of these nodes, so we can see how this circuit would be built.

6.3.1. The TouchSensor node

The TouchSensor node senses a user's mouse movements and actions over objects which it is related to. You can use the TouchSensor to create switches or animations which will only activate when the user wants them to.

The definition for the TouchSensor node is as follows. Note that we are only showing the main fields and events we are interested in here:

  TouchSensor {
    exposedField SFBool  enabled      TRUE
    eventOut     SFBool  isActive
    eventOut     SFBool  isOver
    eventOut     SFTime  touchTime
  }

To use a TouchSensor node you simply include it within the children list of a Group or Transform node. It will then affect any objects in the same group, or in any nested groups, unless overridden by another sensor further down the nesting hierarchy. There is only one field in the TouchSensor node, enabled, which simply switches the sensor on or off; if off, it will not respond to any user actions. The events we are interested in here are the isActive, isOver and touchTime eventOuts. The event isOver outputs a boolean TRUE when the user's mouse is over an affected object and outputs a FALSE when it is not. Note that an event is only generated on the transition, not continuously. So, if the user moves the mouse over a sensed object then a single TRUE event will be generated. The isActive event outputs a boolean TRUE when the user clicks the mouse button on (i.e. selects) an affected object. FALSE is output when the mouse button is released or the mouse pointer moves off the sensed object. Finally, the touchTime event outputs the time (world time, not real time!) when an object was touched. Note this event is generated after you release the mouse button, not when you first press it.
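As a small sketch of these events in use (the node names are our own), the following world switches a point light on while the pointer is over the sphere and off again when it moves away, by routing isOver to the light's implicit set_on eventIn:

  #VRML V2.0 utf8
  Group {
    children [
      Shape { geometry Sphere { } }
      # Senses the sphere, its sibling in this group
      DEF Touch TouchSensor { }
      DEF Lamp PointLight { on FALSE }
    ]
  }
  ROUTE Touch.isOver TO Lamp.set_on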

6.3.2. The TimeSensor node

The TimeSensor node is very useful as it acts as a clock within the virtual world. You can use the TimeSensor node to generate a regular series of events which you can route into a much bigger circuit. For example you can use a TimeSensor to drive an animation or make a light flash.

The definition for the TimeSensor node is as follows:

  TimeSensor {
    exposedField SFTime   cycleInterval    1
    exposedField SFBool   enabled          TRUE
    exposedField SFBool   loop             FALSE
    exposedField SFTime   startTime        0
    exposedField SFTime   stopTime         0
    eventOut     SFTime   cycleTime
    eventOut     SFFloat  fraction_changed
    eventOut     SFBool   isActive
    eventOut     SFTime   time
  }

The definition and operation of this node is quite complex, so you would be better off referring to the VRML97 specification to find out exactly how it works. The parts we are interested in here are cycleInterval, startTime and fraction_changed. When activated and running, the TimeSensor node continuously outputs events on fraction_changed. The value of the event output on fraction_changed is a floating point number between 0.0 and 1.0. A 0.0 indicates the start of a new cycle and a 1.0 indicates the end of one; values are output continuously throughout a cycle. The cycleInterval field specifies in seconds how long each cycle will last. Finally, the startTime specifies when the TimeSensor should start generating outputs. When the world time is equal to or greater than the specified startTime, the TimeSensor will become active and start generating events. A common use of this is to route the touchTime eventOut from a TouchSensor to the set_startTime eventIn of a TimeSensor. The TimeSensor will then start generating events once the TouchSensor is activated.
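Because fraction_changed is of type SFFloat, it can be routed directly to any SFFloat eventIn. As a small sketch (the node names are our own), the following world feeds the climbing fraction into the implicit set_intensity eventIn of a point light, so the light repeatedly ramps from dark to full brightness over each two second cycle:

  #VRML V2.0 utf8
  Group {
    children [
      Shape { geometry Sphere { } }
      # Runs continuously because loop is TRUE
      DEF Pulse TimeSensor { cycleInterval 2.0  loop TRUE }
      DEF Lamp PointLight { }
    ]
  }
  ROUTE Pulse.fraction_changed TO Lamp.set_intensity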

6.3.3. The OrientationInterpolator node

As previously mentioned, the OrientationInterpolator can be used to specify key times and values to provide smooth animation paths. We have already seen the PositionInterpolator node and the OrientationInterpolator is exactly the same except it handles values of type SFRotation instead of SFVec3f. The definition is as follows:

  OrientationInterpolator {
    exposedField MFFloat    key             []
    exposedField MFRotation keyValue        []
    eventIn      SFFloat    set_fraction
    eventOut     SFRotation value_changed
  }

We have already seen what each of these fields does, but as an example we will configure an OrientationInterpolator to make an object spin a full 180° about its X-axis. For this to actually work we would have to route a TimeSensor to the set_fraction eventIn and route the value_changed eventOut to the set_rotation eventIn of a Transform node containing the object. However, the node itself would be configured as follows:

  OrientationInterpolator {
    key [
       0.0,     # start of animation
       1.0      # end of animation
    ]
    keyValue [
       1 0 0  0.0,        # 0 degrees
       1 0 0  3.1416      # to 180 degrees (radians)
    ]
  }

Notice that we need only specify two keyframes. Since we are performing a single smooth rotation, the start and end frames are enough.
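This two keyframe trick only works for turns of up to 180°, however. A rotation of 0.0 radians and one of 6.2832 radians (360°) describe the same orientation, so an interpolator given only those two keys would produce no movement at all. A full spin needs intermediate keyframes, for example (a sketch using a quarter turn per keyframe about the Y-axis):

  OrientationInterpolator {
    key      [ 0.0, 0.25, 0.5, 0.75, 1.0 ]
    keyValue [ 0 1 0  0.0,
               0 1 0  1.5708,
               0 1 0  3.1416,
               0 1 0  4.7124,
               0 1 0  6.2832 ]
  }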

6.3.4. The Transform node

We already know about the Transform node and how it works, but what we are interested in here is changing the rotation field dynamically, using values from the OrientationInterpolator. If we look at the Transform node definition in the VRML97 specification, we will see that the rotation field is actually an exposedField. This means it has the two implicit events, set_rotation and rotation_changed. So, to set the rotation of this transform co-ordinate system we simply route the output of our OrientationInterpolator to the set_rotation eventIn.

6.3.5. Putting it all together

Now that we have seen how each node works and which events we will need to use, we can begin wiring explicit events together. If we now improve our routing diagram shown in figure 2 to show the event names, we can see more clearly how this works. The new diagram is shown in figure 3 below.

Figure 3. A more detailed analysis of our animation circuit.

Now, we will code up this example in VRML. We will use a yellow cylinder as our test object. Remember that we must name every node which we wish to route to. The code is as follows :

#VRML V2.0 utf8

# First define a transform which will create a new co-ordinate
# system to hold our test object
DEF Xform Transform {
  rotation 0 0 1   0.0
  children [
    Shape {
      appearance Appearance {
        material Material {
          # bright yellow
          diffuseColor 1.0 1.0 0.0
        }
      }

      # Just a default size cylinder
      geometry Cylinder { }
    }

    # Because we have included the TouchSensor as a sibling
    # of the Shape node, it will sense actions on our cylinder.
    DEF Touch TouchSensor { }
  ]
}

DEF Clock TimeSensor {
  # 2 second cycle
  cycleInterval 2.0

  # we only want it to spin once
  loop FALSE
}

DEF Interpolator OrientationInterpolator {
  # The key times, start and end of animation
  key [ 0.0, 1.0 ]

  # The key values, 0 degrees and 180 degrees about x-axis
  keyValue [
    1 0 0   0.0,
    1 0 0   3.1416
  ]
}

# Now the Routes which wire up our circuit.

ROUTE Touch.touchTime TO Clock.set_startTime
ROUTE Clock.fraction_changed TO Interpolator.set_fraction
ROUTE Interpolator.value_changed TO Xform.set_rotation

The result of this code is a simple yellow cylinder which is static upon loading the VRML world. If we click on the cylinder, upon releasing the mouse button it will rotate 180° about its X-axis then stop, taking approximately 2 seconds to do so. Figures 4(a) to 4(d) show some stages of this animation in process.

Figure 4(a).
Initial position.

Figure 4(b).
During rotation.

Figure 4(c).
Further round.

Figure 4(d).
Final position.

6.4. Summary

Although this course has been short, hopefully it will have given you some insight into the area of virtual reality and 3D graphics. We really haven't even scratched the surface; 3D graphics is such an enormous field that it would be impossible to cover it in any great detail in a lecture course. New technologies and techniques are coming out extremely quickly and it is very hard to keep up. The best way to learn about fields such as 3D graphics is to teach yourself: find some facet (pun intended) of the subject you are interested in and work through any books or online resources you can find. In some ways things are a lot easier now, as 3D graphics and VR are such popular subjects that there are thousands of good sites on the Internet dedicated to discussing and teaching various aspects of 3D graphics.

Thank you all for taking an interest.


This page is maintained by Peter Young, please send any comments to (peter.young@durham.ac.uk).

Last updated: Tuesday 24 February, 1998. Mmmmmm, Pancakes!