Publication statistics

Publication period: 1993-2006
Publication count: 5
Number of co-authors: 9



Co-authors

Number of publications with three most frequent co-authors:

Craig McKenzie: 1
Jeff Z. Pan: 1
Peter M. D. Gray: 1

Productive colleagues

Alun D. Preece's three most productive colleagues by number of publications:

Laurence Vignollet: 10
T. Radhakrishnan: 7
Robert Plant: 6




Alun D. Preece

Publications by Alun D. Preece (bibliography)

2006

Preece, Alun D., Chalmers, Stuart, McKenzie, Craig, Pan, Jeff Z. and Gray, Peter M. D. (2006): A semantic web approach to handling soft constraints in virtual organisations. In: Fox, Mark S. and Spencer, Bruce (eds.) Proceedings of the 8th International Conference on Electronic Commerce - ICEC 2006, Fredericton, New Brunswick, Canada. pp. 151-161.

1997

Preece, Alun D., Talbot, Stephane and Vignollet, Laurence (1997): Evaluation of Verification Tools for Knowledge-Based Systems. In International Journal of Human-Computer Studies, 47 (5) pp. 629-658.

Validation has emerged as a significant problem in the development of knowledge-based systems (KBS). Verification of KBS correctness and completeness has been cited as one of the most difficult aspects of validation. A number of software tools have been developed to perform such verification, but none of these are in widespread use. One of the reasons for this is that little quantitative evidence exists to demonstrate the effectiveness of the tools. This paper presents an experimental study of three KBS verification tools: a consistency checker, a completeness checker and a testing tool (for correctness). The tools are evaluated on their ability to reveal plausible faults seeded into a complex, realistic KBS application. The cost of using the tools is also measured. It is shown that each tool is independently effective at detecting certain kinds of fault and that the capabilities of the tools are complementary -- a result not revealed by previous studies.

© All rights reserved Preece et al. and/or Academic Press
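
To make the tool categories named in the abstract concrete, here is a minimal, hypothetical Python sketch of the kind of check a consistency checker performs. The rule representation and the check_consistency() helper are invented for illustration; they are not the tools evaluated in the paper.

# Toy consistency check over propositional rules: flag pairs of rules whose
# conditions can hold at the same time but whose conclusions contradict each
# other (p versus not-p). This is only an illustrative sketch.

from itertools import combinations

# A rule: (frozenset of condition literals, conclusion literal).
# A literal is a string; "~x" denotes the negation of "x".
Rule = tuple

def negate(literal: str) -> str:
    return literal[1:] if literal.startswith("~") else "~" + literal

def conditions_compatible(a: frozenset, b: frozenset) -> bool:
    """Conditions can hold together unless one set contains the negation
    of a literal in the other."""
    return not any(negate(lit) in b for lit in a)

def check_consistency(rules: list) -> list:
    """Return pairs of rules that may fire on the same input yet draw
    contradictory conclusions."""
    conflicts = []
    for r1, r2 in combinations(rules, 2):
        (cond1, concl1), (cond2, concl2) = r1, r2
        if concl1 == negate(concl2) and conditions_compatible(cond1, cond2):
            conflicts.append((r1, r2))
    return conflicts

if __name__ == "__main__":
    rules = [
        (frozenset({"fever", "rash"}), "measles"),
        (frozenset({"fever", "vaccinated"}), "~measles"),  # conflicts with the rule above
        (frozenset({"~fever"}), "~measles"),               # no conflict: conditions clash
    ]
    for r1, r2 in check_consistency(rules):
        print("Potential conflict:", r1, "vs", r2)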

1996

Plant, Robert and Preece, Alun D. (1996): Editorial. In International Journal of Human-Computer Studies, 44 (2) pp. 123-125.

As for any software, users of knowledge-based systems (KBS) need to know that they can rely on the system to do its job properly. Assuring the reliability of knowledge-based systems has become an important issue in the development of the knowledge engineering discipline. The processes employed directly to assure the reliability of software are called verification and validation (V&V). Roughly speaking, validation is the process of determining if a KBS meets its users' requirements; verification is the process of determining if a KBS has been constructed to comply with certain formally-specified properties, such as consistency and irredundancy. Implicitly, validation includes verification. Verification and validation techniques for KBS have been discussed and debated in workshops at many of the predominant artificial intelligence conferences in recent years. The purpose of this special issue is to provide "snapshots" of the current state of the V&V area for KBS, by collecting together representative works from three of the most recent workshops:

* at IJCAI-93 in Chambery, France (Chairman: Marc Ayel, Universite de Savoie, France);
* at AAAI-94 in Seattle, USA (Chairman: Robert Plant, co-editor of this issue);
* at ECAI-94 in Amsterdam, The Netherlands (Chairman: Alun Preece, co-editor of this issue).

These workshops succeeded in highlighting many of the significant issues and trends within their area of concern. These issues and trends are reflected in the articles selected for this issue, the authors of which have expanded and updated their original workshop papers. The purpose of this introduction is to highlight some of the issues and trends in KBS V&V, to put this collection in its context.

© All rights reserved Plant and Preece and/or Academic Press

Preece, Alun D., Grossner, Cliff and Radhakrishnan, T. (1996): Validating Dynamic Properties of Rule-Based Systems. In International Journal of Human-Computer Studies, 44 (2) pp. 145-169.

Rule-based systems can be viewed as possessing two sets of properties: static and dynamic. Static properties are those that can be evaluated without executing the system, and dynamic properties can be evaluated only by examining how the system operates at run time. The dynamic properties of a rule-based system have been largely neglected in validation and verification work done thus far. Structural verification and static testing techniques do not yield information on how a rule-based system achieves its goals at run-time, given a set of input data. This paper presents a model for the relationship between the goal states achieved by a rule-based system, the set of inter-related rules that must fire to achieve each goal state, and the data items required for the rules in the rule sequence to fire. Then, we describe a method for applying this model to study the dynamic properties of a rule-based system. It is demonstrated that this model permits the validation of dynamic properties of a rule-based system, enabling system developers to decide: (1) if the manner in which the system pursues goals is valid according to the specifications (and expectations) of the designers; (2) what relationship exists between the quality of system output for a given test case and the goals achieved during problem-solving on that test case; and (3) how the overall problem-solving activity of the system relates to the availability of input data.

© All rights reserved Preece et al. and/or Academic Press
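
The following is a minimal Python sketch of the general idea behind validating dynamic properties: run a toy forward-chaining rule base on a test case, record which rules fire, and compare that trace against the rules the designers expect to be involved in reaching each goal. The rule base, the EXPECTED_RULES table and all names below are assumptions made for illustration, not the model or system described in the paper.

# Each rule: (name, set of required facts, fact it adds when it fires).
RULES = [
    ("r1", {"order_received"}, "stock_checked"),
    ("r2", {"stock_checked"}, "payment_requested"),
    ("r3", {"payment_requested", "payment_ok"}, "order_shipped"),  # goal
]

# Designers' expectation: which rules should fire to achieve each goal state.
EXPECTED_RULES = {"order_shipped": {"r1", "r2", "r3"}}

def run(facts: set):
    """Forward-chain to a fixed point; return the final facts and the
    ordered list of rules that fired."""
    facts = set(facts)
    fired = []
    changed = True
    while changed:
        changed = False
        for name, needs, adds in RULES:
            if needs <= facts and adds not in facts:
                facts.add(adds)
                fired.append(name)
                changed = True
    return facts, fired

if __name__ == "__main__":
    facts, fired = run({"order_received", "payment_ok"})
    for goal, expected in EXPECTED_RULES.items():
        if goal in facts:
            missing = expected - set(fired)
            extra = set(fired) - expected
            print(f"goal {goal!r} reached via {fired}; "
                  f"unexpected rules: {extra or 'none'}, "
                  f"expected but unfired: {missing or 'none'}")
        else:
            print(f"goal {goal!r} not reached with the given input data")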

1993

Preece, Alun D. (1993): A New Approach to Detecting Missing Knowledge in Expert System Rule Bases. In International Journal of Man-Machine Studies, 38 (4) pp. 661-688.

Two of the most important and difficult tasks in building expert systems are knowledge acquisition (KA) and quality assurance (QA). QA involves verification and validation (V&V) techniques, which often reveal errors that must be rectified through further KA. Traditionally, V&V is done by extensive testing of the expert system, but this is difficult, time-consuming, labour-intensive (requiring considerable human expert involvement) and unreliable (due to the large size of most expert system input domains). V&V can be assisted by using a verification tool to detect anomalies in the knowledge base of an expert system, such as redundancies, conflicts, circularities and deficiencies (missing knowledge). Such anomalies are usually indicative of errors in the expert system. Deficiencies are important for QA and KA: they indicate incomplete portions of the knowledge base which should either be specified as known domain bounds or completed via further KA. This paper describes the COVER deficiency detection tool which: minimizes human expert involvement in knowledge base verification; focuses the search for meaningful deficiencies; integrates closely with checks for redundancy, conflicts and circularity; maximizes user-control over deficiency detection; and overcomes the combinatorial explosion traditionally associated with the deficiency check. The paper describes how COVER uses heuristics about the nature of likely deficiencies to improve its performance and clarify reporting of deficiencies to the user. COVER performance is analysed in detail, both theoretically and on real-world expert system knowledge bases.

© All rights reserved Preece and/or Academic Press
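
As a rough illustration of what a deficiency (missing-knowledge) check looks for, the hypothetical Python sketch below enumerates the legal input combinations of a toy rule base and reports those that no rule covers. It is far simpler than COVER and uses none of its heuristics; the rule base and the find_deficiencies() helper are invented for illustration.

from itertools import product

# Legal values for each input attribute (the stated domain bounds).
DOMAIN = {
    "temperature": ["low", "normal", "high"],
    "pressure": ["low", "high"],
}

# Each rule: a dict of required attribute values and a conclusion.
RULES = [
    ({"temperature": "high", "pressure": "high"}, "shutdown"),
    ({"temperature": "high", "pressure": "low"}, "vent"),
    ({"temperature": "normal"}, "continue"),
]

def find_deficiencies():
    """Yield every legal input combination that fires no rule."""
    attrs = sorted(DOMAIN)
    for values in product(*(DOMAIN[a] for a in attrs)):
        case = dict(zip(attrs, values))
        covered = any(all(case[a] == v for a, v in cond.items())
                      for cond, _ in RULES)
        if not covered:
            yield case

if __name__ == "__main__":
    for case in find_deficiencies():
        print("No rule covers:", case)   # here, the temperature=low cases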

 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/alun_d__preece.html