Centre for Software Maintenance
University of Durham
Ten years is an incredibly long time in the field of computing, and accurately predicting future trends and technologies is no easy task. The broad range of specialist topics which constitute computer science will no doubt be subject to large degrees of change. Some areas will gain popularity or produce innovative ideas and flourish, often spawning completely new 'communities' of research, whereas others may well stagnate. One such area which is beginning to gain popularity is Software Visualisation.
Software visualisation (SV) is not a new area of research; classic forms of SV include flowcharts, entity-relationship diagrams and pretty-printed code. Research into SV has changed greatly with the advent of good-quality computer graphics, the emphasis now being on customisable, interactive and animated visualisations with a high degree of automation. SV covers a wide range of features of a software system, such as control flow, data storage and flow, software structure, system structure, algorithms and object relationships, typically with the goal of aiding understanding.
The majority of SV systems currently rely on producing 2-dimensional illustrations and animations of the various features of a software system. The widespread availability of more powerful computers has brought about additional research into the use of 3-dimensional visualisations, though this has concentrated largely on algorithm animations, using the additional dimension primarily as a 'history' of the animation. The research currently under way is to investigate the use of 3D graphics and virtual environments to abstractly model software systems, or to augment existing visualisations constructively.
As technology advances into the twenty-first century we will see the face of computer science changing. One of the most prominent advances may well be the softening of the human-computer interface. Ever since computers were first created we have been striving to improve upon the man-machine interface, to make it less obtrusive and more intuitive. At present we stare into a glowing box while hammering away at a bank of switches; this is not a very 'natural' interface.
Possibly the ultimate reduction of the human-computer interface could be as described by William Gibson in his short story Burning Chrome and expanded upon in his book Neuromancer. Gibson depicts the future of computing as users 'jacking-in' to computer systems facilitated by neural implants and nerve-splicing. The users then have the experience of being totally submerged within a computer generated virtual environment, described as the Matrix. An interface such as this would completely redefine the way we work with computers and each other.
"The matrix is an abstract representation of the relationships between data systems. Programmers jack into their employer's section of the matrix and find themselves surrounded by bright geometries representing the corporate data. Towers and fields of it ranged in the colourless non-space of the simulation matrix, the electronic consensus-hallucination that facilitates the handling and exchange of massive quantities of data."
Burning Chrome, William Gibson
The lure of producing such an environment is not entirely fictional or far-fetched. It is easy to see that humans crave 3-dimensional environments within which to interact, as these provide an instantly intuitive interface and surroundings, not to mention that our visual senses are accustomed to stereoscopic vision and depth perception. A good example of the instant appeal of 3D interaction is the success of the games company id Software, which has revolutionised the games-playing arena in the last four years with Wolfenstein, Doom, and the forthcoming Quake, each of which has improved upon the illusion of being immersed within a computer world.
Although the interface presented by Gibson could not be realised within the foreseeable future, if at all, we can already begin experimenting with possible virtual environments to support such immersion. Current virtual reality technology allows us to immerse the user within a virtual environment and permits a reasonable level of interaction; the problem now is determining the nature of the environment and the objects within it.
Gibson describes the appearance or form of the data structures within the matrix in only minimal detail, possibly for two reasons. First, allowing readers to form their own interpretation of the environment would probably be more meaningful to them and make for a better read; second, Gibson is not a computer scientist and probably has little knowledge of the complexity or intricacies of software. Inadvertently or otherwise, he has hit the proverbial nail on the head: how do you visualise a software system?
To create new software or maintain existing software within the matrix would require the effective visualisation of abstract software entities or objects, together with support for complex interaction with and manipulation of those objects. Software currently contains too much low-level detail to visualise effectively and to allow meaningful manipulation and alteration within a simulated environment. This is most evident in visual programming systems: the programmer can create the general structure of the program using visual techniques, but is normally required to enter the majority of the underlying code through a traditional textual interface. The low-level operation of a program is generally too complicated to abstract into a diagram or set of diagrams.
This problem is certainly true of current programming languages and software systems; however, future programming methods may well address it. I would argue that object-oriented systems are here to stay, and these will bring with them a slight lessening of the visualisation problem. Objects are themselves an abstraction of a piece of software and should therefore lend themselves more readily to a visual abstraction. On a larger scale, software components and object-oriented frameworks will attempt to standardise the interface between software elements, which should provide a mechanism for higher-level manipulation of software. It may even be possible to employ some form of AI system which performs the low-level interfacing automatically, effectively stitching together the higher-level objects. Unfortunately the problem will still remain that to produce anything new we will have to create a new set of objects and interfaces; do we then return to the traditional methods?
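The idea of standardised component interfaces enabling higher-level 'stitching' can be sketched in a few lines. This is purely speculative, in the spirit of the paragraph above; every class and name here is hypothetical, not drawn from any real framework:

```python
# A speculative sketch: components declare what kind of data they
# consume and produce, so a higher-level tool (or an AI assistant)
# could check and chain them without touching their low-level code.
# All names here are illustrative, not from any real system.

from abc import ABC, abstractmethod

class Component(ABC):
    consumes: str  # name of the data kind this component accepts
    produces: str  # name of the data kind this component outputs

    @abstractmethod
    def run(self, payload):
        ...

class Tokeniser(Component):
    consumes, produces = "text", "tokens"
    def run(self, payload):
        return payload.split()

class Counter(Component):
    consumes, produces = "tokens", "count"
    def run(self, payload):
        return len(payload)

def compatible(a: Component, b: Component) -> bool:
    # The kind of check a visual environment could make visible:
    # two mismatched components simply would not join up.
    return a.produces == b.consumes

def stitch(components, payload):
    """Chain components in order, the 'low-level interfacing' done for us."""
    for c in components:
        payload = c.run(payload)
    return payload

pipeline = [Tokeniser(), Counter()]
assert compatible(pipeline[0], pipeline[1])
result = stitch(pipeline, "visualising software systems")
```

The `compatible` check is exactly the sort of property a 3D environment could render visually, so that ill-fitting components are seen not to join before any code is written.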
A software engineering environment such as this would also redefine the way in which a team of software engineers would tackle a particular system. If the team were all present within a shared virtual environment and were able to communicate with each other while working on some software, it may open the door to a rather strange working environment. If we draw an analogy from the virtual software 'building site' to the more familiar building site, we can imagine calls of 'Oi Bob! Pass me the quicksort routine'. It may also be possible to 'see' potential problems at an earlier stage by observing the work of other team members. For example, when building a house, two builders could see at an early stage that the two walls they are building will not meet up. In the same way, a software engineer may be able to see that the software entity being constructed by another team member will not interface correctly with his own.
Progress within the next ten years will not come close to addressing the problems given above, though research in the area of SV will definitely flourish. The following are just some of the related topics which I believe will have a bearing on SV research.
I believe that the current work on Algorithm Visualisation will improve upon the level of automation and sophistication currently available. At present a great many systems require the user to augment the program under study with various 'probes' which report appropriate information to the visualiser, either post-mortem or during run-time. The user must therefore already have some understanding of the system in order to know where to place the probes and which information to extract, somewhat defeating their use as a program comprehension aid (most are used in teaching and demonstrating algorithms). Current systems are also rather limited in what they can visualise, dealing mainly with stacks, queues, sorting routines and other well-documented algorithms and data structures. I believe we will see an improvement in the generality of the visualisers, both for 'confined' algorithms such as a function sorting local data, and for more global operations which can visualise the manipulation and 'movement' of data within a program as a whole.
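The probe-based approach described above is easy to illustrate. The sketch below manually instruments a bubble sort with probe calls, recording a trace a visualiser could replay post-mortem; the probe protocol and event names are invented for illustration, not taken from any particular system:

```python
# A minimal sketch of probe-based algorithm visualisation: the program
# under study is augmented by hand with 'probe' calls reporting events
# (here, comparisons and swaps) which a visualiser could animate.
# The event vocabulary is illustrative only.

from typing import Callable, List, Tuple

Event = Tuple[str, int, int]  # (kind, index_a, index_b)

def bubble_sort(data: List[int], probe: Callable[[str, int, int], None]) -> None:
    """Sort in place, reporting each comparison and swap to the probe."""
    n = len(data)
    for i in range(n):
        for j in range(n - i - 1):
            probe("compare", j, j + 1)
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
                probe("swap", j, j + 1)

trace: List[Event] = []
values = [3, 1, 2]
bubble_sort(values, lambda kind, a, b: trace.append((kind, a, b)))
# 'values' is now sorted; a visualiser would replay 'trace' as an
# animation, either post-mortem (as here) or live during the run.
```

Note the weakness the paragraph identifies: the probes had to be placed by someone who already understood the algorithm well enough to know that comparisons and swaps were the interesting events.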
There is also a great deal of current work investigating the use of virtual environments in CSCW (Computer Supported Co-operative Work). The work already done in this field projects the users into a shared virtual environment in which they can see and interact with other users. I believe that this area of research will gain popularity very rapidly, and we should see some notable advances within the next ten years, particularly in how the virtual users can work co-operatively to achieve a goal.
I would also hope to see a great improvement in research into visualisation of a program as a whole, with the emphasis on aiding program comprehension. This will require a range of factors to be considered, such as the representation of program objects, how to aid the human learning process through these visualisations, how to abstractly model the software, and how to make this a significant aid to software maintenance.
I foresee the following advances in Software Visualisation within possibly the next ten years:
This page is maintained by Peter Young, please send any comments, jokes, insults or general abuse to (firstname.lastname@example.org).
Last updated: Thursday 1 February, 1996.