Number of co-authors: 56
Number of publications with 3 favourite co-authors: Kieron O'Hara (5), Nicholas Gibbins (4), Srinandan Dasmahapatra (3)
Nigel Shadbolt's 3 most productive colleagues, by number of publications: Wendy Hall (58), David E. Millard (37), Mark J. Weal (24)
Has also published under the name of:
"Nigel R. Shadbolt"
Personal Homepage: http://users.ecs.soton.ac.uk/nrs/
Publications by Nigel Shadbolt (bibliography)
Popov, Igor, schraefel, m.c., Hall, Wendy and Shadbolt, Nigel (2012): mashpoint: browsing the web along structured lines. In: Adjunct Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 81-82.
Large numbers of Web sites support rich data-centric features for exploring and interacting with data. In this paper we present mashpoint, a framework that allows distributed data-powered Web applications to be linked based on similarities between the entities in their data. By linking applications in this way, we allow browsing with selections of data from one application to another. This sort of browsing allows complex queries and exploration of data to be performed by average Web users across multiple applications. We additionally use this concept to surface structured information to users in Web pages. In this paper we present this concept and our initial prototype.
© All rights reserved Popov et al. and/or ACM Press
Yeung, Ching-man Au, Gibbins, Nicholas and Shadbolt, Nigel (2009): Contextualising tags in collaborative tagging systems. In: Proceedings of the 20th ACM Conference on Hypertext and Hypermedia 2009. pp. 251-260.
Collaborative tagging systems are now popular tools for organising and sharing information on the Web. While collaborative tagging offers many advantages over the use of controlled vocabularies, such systems also suffer from problems such as the existence of polysemous tags. We investigate how the different contexts in which individual tags are used can be revealed automatically, without consulting any external resources. We consider several different network representations of tags and documents, and apply a graph clustering algorithm to these networks to obtain groups of tags or documents corresponding to the different meanings of an ambiguous tag. Our experiments show that networks which explicitly take the social context into account are more likely to give a better picture of the semantics of a tag.
© All rights reserved Yeung et al. and/or their publisher
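The network idea in the abstract above can be sketched in a few lines of Python. This is an illustrative toy, not the authors' published method: it builds a co-occurrence graph of the tags that appear alongside an ambiguous tag and treats connected components as candidate contexts, whereas the paper evaluates several richer network representations and a proper graph clustering algorithm. The bookmark data are invented.

```python
from collections import defaultdict

def tag_contexts(bookmarks, ambiguous):
    """Group the tags co-occurring with `ambiguous` into candidate contexts.

    bookmarks: iterable of sets of tags, each set the tags one user
    assigned to one document. Returns a list of tag sets, one per
    connected component of the co-occurrence graph (a crude stand-in
    for the clustering used in the paper).
    """
    # Build an undirected co-occurrence graph over the neighbour tags.
    adj = defaultdict(set)
    for tags in bookmarks:
        if ambiguous not in tags:
            continue
        others = [t for t in tags if t != ambiguous]
        for a in others:
            for b in others:
                if a != b:
                    adj[a].add(b)
    # Connected components approximate the distinct usage contexts.
    seen, components = set(), []
    for start in list(adj):
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:
            t = stack.pop()
            if t in component:
                continue
            component.add(t)
            seen.add(t)
            stack.extend(adj[t] - component)
        components.append(component)
    return components
```

For bookmarks tagging "apple" alongside {fruit, pie}, {fruit, recipe}, {mac, osx} and {osx, laptop}, the sketch separates a cooking context {fruit, pie, recipe} from a computing context {mac, osx, laptop}.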
Noll, Michael G., Yeung, Ching-man Au, Gibbins, Nicholas, Meinel, Christoph and Shadbolt, Nigel (2009): Telling experts from spammers: expertise ranking in folksonomies. In: Proceedings of the 32nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2009. pp. 612-619.
With a suitable algorithm for ranking the expertise of a user in a collaborative tagging system, we will be able to identify experts and discover useful and relevant resources through them. We propose that the level of expertise of a user with respect to a particular topic is mainly determined by two factors. Firstly, an expert should possess a high quality collection of resources, while the quality of a Web resource depends on the expertise of the users who have assigned tags to it. Secondly, an expert should be one who tends to identify interesting or useful resources before other users do. We propose a graph-based algorithm, SPEAR (SPamming-resistant Expertise Analysis and Ranking), which implements these ideas for ranking users in a folksonomy. We evaluate our method with experiments on data sets collected from Delicious.com comprising over 71,000 Web documents, 0.5 million users and 2 million shared bookmarks. We also show that the algorithm is more resistant to spammers than other methods such as the original HITS algorithm and simple statistical measures.
© All rights reserved Noll et al. and/or their publisher
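The two factors in the abstract above map naturally onto a HITS-style mutual-reinforcement loop. The sketch below is a loose, hypothetical reconstruction, not the published SPEAR algorithm: the credit function, discount and normalisation are invented for illustration, and only the two principles stated in the abstract (earlier taggers earn more credit; user expertise and resource quality reinforce each other) come from the paper.

```python
def spear_sketch(taggings, iterations=10):
    """Toy expertise/quality ranking in the spirit of SPEAR.

    taggings: dict mapping document -> list of users in chronological
    tagging order. Returns (expertise, quality) score dicts.
    """
    users = sorted({u for order in taggings.values() for u in order})
    docs = sorted(taggings)
    # Credit matrix: earlier taggers of a document earn more credit
    # (square-root discount is an arbitrary illustrative choice).
    credit = {(u, d): 0.0 for u in users for d in docs}
    for d, order in taggings.items():
        n = len(order)
        for rank, u in enumerate(order):
            credit[(u, d)] = (n - rank) ** 0.5
    expertise = {u: 1.0 for u in users}
    quality = {d: 1.0 for d in docs}
    for _ in range(iterations):
        # Mutual reinforcement, as in HITS: experts hold high-quality
        # resources; quality flows from the expertise of the taggers.
        expertise = {u: sum(credit[(u, d)] * quality[d] for d in docs)
                     for u in users}
        quality = {d: sum(credit[(u, d)] * expertise[u] for u in users)
                   for d in docs}
        e_norm = sum(expertise.values())
        q_norm = sum(quality.values())
        expertise = {u: v / e_norm for u, v in expertise.items()}
        quality = {d: v / q_norm for d, v in quality.items()}
    return expertise, quality
```

With a user who always tags first, the sketch ranks that user above consistently later taggers, which is the behaviour that makes the approach resistant to spammers who pile onto already-popular resources.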
Dupplaw, David, Dasmahapatra, Srinandan, Hu, Bo, Lewis, Paul and Shadbolt, Nigel (2009): A distributed, service-based framework for knowledge applications with multimedia. In ACM Transactions on Information Systems, 27 (4) p. 22.
The current trend in distributed systems is towards service-based integration. This article describes an ontology-driven framework implemented to provide knowledge management for data of different modalities, with multimedia processing, annotation, and reasoning provided by remote services. The framework was developed in, and is presented in the context of, the Medical Imaging and Advanced Knowledge Technologies (MIAKT) project, which sought to support the Multidisciplinary Meetings (MDMs) that take place during breast cancer screening to diagnose patients. However, the architecture is entirely independent of the specific application domain and can be quickly prototyped into new domains. An Enterprise server provides resource access to a client-side presentation application which, in turn, provides knowledge visualization and markup of any supported media, as defined by a domain-dependent ontology-supported language.
© All rights reserved Dupplaw et al. and/or ACM Press
Yeung, Ching-man Au, Gibbins, Nicholas and Shadbolt, Nigel (2008): A k-Nearest-Neighbour Method for Classifying Web Search Results with Data in Folksonomies. In: 2008 IEEE / WIC / ACM International Conference on Web Intelligence WI 2008 9-12 December, 2008, Sydney, NSW, Australia. pp. 70-76.
Shadbolt, Nigel (2008): The Emergence of Web Science. In: 2008 IEEE / WIC / ACM International Conference on Web Intelligence WI 2008 9-12 December, 2008, Sydney, NSW, Australia. p. 19.
Hendler, James A., Shadbolt, Nigel, Hall, Wendy, Berners-Lee, Tim and Weitzner, Daniel J. (2008): Web science: an interdisciplinary approach to understanding the web. In Communications of the ACM, 51 (7) pp. 60-69.
Wong, Sylvia C., Crowder, Richard M., Wills, Gary B. and Shadbolt, Nigel (2007): Lessons learnt from a large-scale industrial semantic web application. In: Proceedings of the Eighteenth ACM Conference on Hypertext and Hypermedia 2007. pp. 21-30.
The design and maintenance of an aero-engine generates a significant amount of documentation. When designing new engines, engineers must obtain knowledge gained from the maintenance of existing engines to identify possible areas of concern. We developed a Semantic Web based document repository for transferring front-line maintenance knowledge to design. The Semantic Web is an ideal candidate for this application because of the size and distributed nature of an aerospace manufacturer's operation. The Semantic Web allows us to dynamically cross-reference documents with the use of an ontology. However, during the design and implementation of this project, we found deficiencies in the W3C-recommended Semantic Web query language SPARQL. Using SPARQL, it is difficult to answer the questions our users posed to the document repository. The problem is that SPARQL is designed for handling textual queries. In industrial applications, many common textual and semantic questions also contain a numerical element, be it data summarization or arithmetic operations. In this paper, we generalize the problems we found with SPARQL, and extend our analysis to cover web applications in non-aerospace domains. Based on this analysis, we recommend that SQL-style grouping, aggregation and variable operations be added to SPARQL, as they are necessary for industrial applications of the Semantic Web. At the moment, to answer the non-textual questions we identified with an RDF store, custom-written software is needed to process the results returned by SPARQL. We incorporated the suggested numerical functionalities from SQL for an example query, and achieved a 21.7% improvement in execution speed. More importantly, we eliminate the need for extra processing in software, and thus make it easier and quicker to develop Semantic Web applications.
© All rights reserved Wong et al. and/or ACM Press
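The grouping and aggregation the authors call for did later arrive in the language: SPARQL 1.1, a W3C Recommendation since 2013, added GROUP BY, HAVING and aggregates such as COUNT and SUM. The snippet below sketches, in plain Python, the kind of post-processing the paper says had to be written as custom software around an earlier SPARQL endpoint; the maintenance-report variable bindings are invented for illustration.

```python
from collections import defaultdict

def group_and_aggregate(rows, group_var, value_var):
    """SQL-style GROUP BY with COUNT and SUM over SPARQL SELECT
    results, represented as a list of variable-binding dicts."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[group_var]].append(row[value_var])
    return {key: {"count": len(vals), "sum": sum(vals)}
            for key, vals in groups.items()}

# Hypothetical bindings returned by a maintenance-document query.
rows = [
    {"component": "fan-blade", "hours": 3},
    {"component": "fan-blade", "hours": 5},
    {"component": "combustor", "hours": 2},
]
print(group_and_aggregate(rows, "component", "hours"))
# {'fan-blade': {'count': 2, 'sum': 8}, 'combustor': {'count': 1, 'sum': 2}}
```

Doing this in application code, per query, is exactly the overhead the paper measured; pushing the grouping into the query engine removed it.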
Weal, Mark J., Alani, Harith, Kim, Sanghee, Lewis, Paul H., Millard, David E., Sinclair, Patrick A. S., Roure, David C. De and Shadbolt, Nigel (2007): Ontologies as facilitators for repurposing web documents. In International Journal of Human-Computer Studies, 65 (6) pp. 537-562.
This paper investigates the role of ontologies as a central part of an architecture to repurpose existing material from the web. A prototype system called ArtEquAKT is presented, which combines information extraction, knowledge management and consolidation techniques and adaptive document generation. All of these components are co-ordinated using one central ontology, providing a common vocabulary for describing the information fragments as they are processed. Each of the components of the architecture is described in detail and an evaluation of the system discussed. Conclusions are drawn as to the effectiveness of such an approach and further challenges are outlined.
© All rights reserved Weal et al. and/or Academic Press
Hu, Bo, Dasmahapatra, Srinandan, Dupplaw, David, Lewis, Paul and Shadbolt, Nigel (2007): Reflections on a medical ontology. In International Journal of Human-Computer Studies, 65 (7) pp. 569-582.
In this paper we confront the divide between ontologies developed from the requirement of comprehensive and general domain coverage and those devised to meet application-specific requirements. While the generalists typically attach philosophical sophistication to their approach, in supposed contrast to the narrow remit chosen by the application-bound knowledge engineers, we would like to indicate that the latter practice can often reflect a multi-faceted rationale, nuanced by the requirements of the domain. We demonstrate how the necessity of placing ontology-based systems within the work-practices of domain experts introduces unique demands on design rationales and enforces, often implicitly, a philosophical assessment of the necessary concepts and relations that balances generality and specificity. Such demands are not addressed by generic approaches to modelling the reality of a domain. Indeed, we articulate the philosophical and practical considerations that we have taken into account when developing an application-specific ontology. We hope that our experiences can be of help in the development of ontologies for similar applications.
© All rights reserved Hu et al. and/or Academic Press
Alani, Harith, Noy, Natasha Fridman, Shah, Nigam, Shadbolt, Nigel and Musen, Mark A. (2007): Searching ontologies based on content: experiments in the biomedical domain. In: Sleeman, Derek H. and Barker, Ken (eds.) K-CAP 2007 - Proceedings of the 4th International Conference on Knowledge Capture October 28-31, 2007, Whistler, BC, Canada. pp. 55-62.
Hu, Bo, Croitoru, Madalina, Dasmahapatra, Srinandan, Lewis, Paul H. and Shadbolt, Nigel (2007): Indexing ontologies with semantics-enhanced keywords. In: Sleeman, Derek H. and Barker, Ken (eds.) K-CAP 2007 - Proceedings of the 4th International Conference on Knowledge Capture October 28-31, 2007, Whistler, BC, Canada. pp. 119-126.
McRae-Spencer, Duncan M. and Shadbolt, Nigel (2006): Also by the same author: AKTiveAuthor, a citation graph approach to name disambiguation. In: JCDL06: Proceedings of the 6th ACM/IEEE-CS Joint Conference on Digital Libraries 2006. pp. 53-54.
The desire for definitive data and the Semantic Web's drive for inference over heterogeneous data sources require co-reference resolution to be performed on those data. In particular, name disambiguation is required to allow accurate publication lists, citation counts and impact measures to be determined. This paper describes a graph-based approach to author disambiguation on large-scale citation networks. Using self-citation, co-authorship and document source analyses, AKTiveAuthor clusters papers, achieving a precision of 0.997 and a recall of 0.818 over a test group of eight surname clusters.
© All rights reserved McRae-Spencer and Shadbolt and/or ACM Press
Luo, Xudong, Jennings, Nicholas R. and Shadbolt, Nigel (2006): Acquiring user tradeoff strategies and preferences for negotiating agents: A default-then-adjust method. In International Journal of Human-Computer Studies, 64 (4) pp. 304-321.
A wide range of algorithms have been developed for various types of negotiating agents. In developing such algorithms the main focus has been on their efficiency and their effectiveness. However, this is only a part of the picture. Typically, agents negotiate on behalf of their owners, and for this to be effective the agents must be able to adequately represent their owners' strategies and preferences for negotiation. However, the process by which such knowledge is acquired is typically left unspecified. To address this problem, we undertook a study of how user information about negotiation tradeoff strategies and preferences can be captured. Specifically, we devised a novel default-then-adjust acquisition technique. In this, the system first conducts a structured interview with the user to suggest the attributes between which a tradeoff could be made; it then asks the user to adjust the suggested default tradeoff strategy, improving one attribute to see how much worse the attribute being traded off can be made while still remaining acceptable; finally, it asks the user to adjust the default preference on the tradeoff alternatives. This method is consistent with the principles of standard negotiation theory, and to demonstrate its effectiveness we implemented a prototype system and performed an empirical evaluation in an accommodation-renting scenario. The result of this evaluation indicates that the proposed technique is helpful and efficient in accurately acquiring users' tradeoff strategies and preferences.
© All rights reserved Luo et al. and/or Academic Press
Milton, Nick, Clarke, David and Shadbolt, Nigel (2006): Knowledge engineering and psychology: Towards a closer relationship. In International Journal of Human-Computer Studies, 64 (12) pp. 1214-1229.
Knowledge engineering projects deal with a wide range of domains within organizational and academic contexts. A number of elicitation techniques are used to acquire knowledge from experts. Most of these techniques originated within psychology but have been developed by knowledge engineers to become more structured, efficient and systematic. Until now, nobody has tried to re-apply these modified techniques back into psychology. This paper describes work that addresses this matter. It focuses on the psychological knowledge possessed by all people that enables them to deal with everyday problems and make life decisions. We refer to this as 'personal knowledge'. To take a knowledge engineering approach to personal knowledge, we investigated the use of knowledge elicitation techniques to capture personal knowledge. We describe an empirical study involving ten participants and 80 knowledge acquisition sessions that assessed eight elicitation techniques in this context. The results revealed that each of the techniques showed promise at efficiently capturing and structuring aspects of an individual's personal knowledge. A content analysis of the acquired knowledge led to the construction of a meta-model (a primitive ontology) of personal knowledge and to the design for a new methodology for psychological research. From the perspective of psychology, the paper shows that knowledge engineering methods can be of value to psychologists. From the perspective of knowledge engineering and the wider computer science community, the paper shows that empirical methods used by psychologists can benefit the development and evaluation of ontologies and elicitation techniques.
© All rights reserved Milton et al. and/or Academic Press
Shadbolt, Nigel, Berners-Lee, Tim, Hendler, Jim, Hart, Claire and Benjamins, Richard (2006): The next wave of the web. In: Proceedings of the 2006 International Conference on the World Wide Web 2006. p. 750.
The World Wide Web has been revolutionary in terms of impact, scale and outreach. At every level society has been changed in some way by the Web. This Panel will consider likely developments in this extraordinary human construct as we attempt to realise the Next Wave of the Web -- a Semantic Web. Nigel Shadbolt will Chair a discussion that will focus on the prospects for the Semantic Web, its likely form and the challenges it faces. Can we achieve the necessary agreements on shared meaning for the Semantic Web? Can we achieve a critical mass of semantically annotated data and content? How are we to trust such content? Do the scientific and commercial drivers really demand a Semantic Web? How will the move to a mobile and ubiquitous Web affect the Semantic Web? How does Web 2.0 relate to the Semantic Web?
© All rights reserved Shadbolt et al. and/or ACM Press
Middleton, Stuart E., Shadbolt, Nigel and Roure, David C. De (2004): Ontological user profiling in recommender systems. In ACM Transactions on Information Systems, 22 (1) pp. 54-88.
We explore a novel ontological approach to user profiling within recommender systems, working on the problem of recommending on-line academic research papers. Our two experimental systems, Quickstep and Foxtrot, create user profiles from unobtrusively monitored behaviour and relevance feedback, representing the profiles in terms of a research-paper topic ontology. A novel profile visualization approach is taken to acquire profile feedback. Research papers are classified using ontological classes, and collaborative recommendation algorithms are used to recommend papers seen by similar people on their current topics of interest. Two small-scale experiments, with 24 subjects over 3 months, and a large-scale experiment, with 260 subjects over an academic year, are conducted to evaluate different aspects of our approach. Ontological inference is shown to improve user profiling, external ontological knowledge is used to successfully bootstrap a recommender system, and profile visualization is employed to improve profiling accuracy. The overall performance of our ontological recommender systems is also presented and compared favourably to other systems in the literature.
© All rights reserved Middleton et al. and/or ACM Press
Castro-Schez, Jose J., Jennings, Nicholas R., Luo, Xudong and Shadbolt, Nigel (2004): Acquiring domain knowledge for negotiating agents: a case of study. In International Journal of Human-Computer Studies, 61 (1) pp. 3-31.
In this paper, we employ the fuzzy repertory table technique to acquire the necessary domain knowledge for software agents to act as sellers and buyers using a bilateral, multi-issue negotiation model that can achieve optimal results in semi-competitive environments. In this context, the seller's domain knowledge that needs to be acquired is the rewards associated with the products and the restrictions attached to their purchase. The buyer's domain knowledge that is acquired is their requirements and preferences on the desired products. The knowledge acquisition methods we develop involve constructing three fuzzy repertory tables and their associated distinction matrices. The first two are employed to acquire the seller agent's domain knowledge; the third is used, together with an inductive machine learning algorithm, to acquire the domain knowledge for the buyer agent.
© All rights reserved Castro-Schez et al. and/or Academic Press
Luo, Xudong, Jennings, Nicholas R. and Shadbolt, Nigel (2003): Knowledge-based acquisition of tradeoff preferences for negotiating agents. In: Sadeh, Norman M., Dively, Mary Jo, Kauffman, Robert J., Labrou, Yannis, Shehory, Onn, Telang, Rahul and Cranor, Lorrie Faith (eds.) Proceedings of the 5th International Conference on Electronic Commerce - ICEC 2003 September 30 - October 03, 2003, Pittsburgh, Pennsylvania, USA. pp. 138-149.
Shadbolt, Nigel, Burke, Murray A. and Friedland, Noah S. (2003): Panel discussion on knowledge acquisition projects (Halo, RKF, and AKT). In: Gennari, John H., Porter, Bruce W. and Gil, Yolanda (eds.) K-CAP 2003 - Proceedings of the 2nd International Conference on Knowledge Capture October 23-25, 2003, Sanibel Island, FL, USA. pp. 2-3.
Gibbins, Nicholas, Harris, Stephen and Shadbolt, Nigel (2003): Agent-based semantic web services. In: Proceedings of the 2003 International Conference on the World Wide Web 2003. pp. 710-717.
The Web Services world consists of loosely-coupled distributed systems which adapt to ad-hoc changes by the use of service descriptions that enable opportunistic service discovery. At present, these service descriptions are semantically impoverished, being concerned with describing the functional signature of the services rather than characterising their meaning. In the Semantic Web community, the DAML Services effort attempts to rectify this by providing a more expressive way of describing Web services using ontologies. However, this approach does not separate the domain-neutral communicative intent of a message (considered in terms of speech acts) from its domain-specific content, unlike similar developments from the multi-agent systems community. In this paper, we describe our experiences of designing and building an ontologically motivated Web Services system for situational awareness and information triage in a simulated humanitarian aid scenario. In particular, we discuss the merits of using techniques from the multi-agent systems community for separating the intentional force of messages from their content, and the implementation of these techniques within the DAML Services model.
© All rights reserved Gibbins et al. and/or ACM Press
Tennison, Jeni, O'Hara, Kieron and Shadbolt, Nigel (2002): APECKS: using and evaluating a tool for ontology construction with internal and external KA support. In International Journal of Human-Computer Studies, 56 (4) pp. 375-422.
This paper describes Adaptive Presentation Environment for Collaborative Knowledge Structuring (APECKS), an experimental tool for collaborative ontology construction. APECKS takes a different line to most ontology servers, in that it is designed for use by domain experts, possibly in the absence of a knowledge engineer, and its aim is to foster and support debate about domain ontologies. To that end, it does not enforce ideals of consistency or correctness, and instead allows different conceptualizations of a domain to coexist. The system architecture and life cycle are introduced, and three extensive scenarios are outlined, showing how APECKS supports ontology construction, learning, ontology comparison and discussion. APECKS has also been used by several subjects during an evaluation experiment, and the results of this experiment are described. A particular feature of APECKS is that, as well as providing internal KA support, it is designed to interface with web-accessible KA tools, thereby allowing theoretically unlimited KA support for users. The prototype used WebGrid-II as external KA support, and the issues involved in integrating APECKS and WebGrid are discussed in detail.
© All rights reserved Tennison et al. and/or Academic Press
Crow, Louise and Shadbolt, Nigel (2001): Extracting Focused Knowledge from the Semantic Web. In International Journal of Human-Computer Studies, 54 (1) pp. 155-184.
Ontologies are increasingly being recognized as a critical component in making networked knowledge accessible. Software architectures which can assemble knowledge from networked sources coherently, according to the requirements of a particular task or perspective, will be at a premium in the next generation of web services. We argue that the ability to generate task-relevant ontologies efficiently and relate them to web resources will be essential for creating a machine-inferencable "semantic web". The Internet-based multi-agent problem solving (IMPS) architecture described here is designed to facilitate the retrieval, restructuring, integration and formalization of task-relevant ontological knowledge from the web. There are rich structured and semi-structured sources of knowledge available on the web that present implicit or explicit ontologies of domains. Knowledge-level models of tasks have an important role to play in extracting and structuring useful, focused problem-solving knowledge from these web sources. IMPS uses a multi-agent architecture to combine these models with a selection of web knowledge extraction heuristics, providing clean syntactic integration of ontological knowledge from diverse sources and supporting a range of ontology merging operations at the semantic level. Whilst our specific aim is to enable on-line knowledge acquisition from web sources to support knowledge-based problem solving by a community of software agents encapsulating problem-solving inferences, the techniques described here can be applied to more general task-based integration of knowledge from diverse web sources, and to the provision of services such as the critical comparison, fusion, maintenance and update of both formal and informal ontologies.
© All rights reserved Crow and Shadbolt and/or Academic Press
Cheng, Peter C.-H., Cupit, James and Shadbolt, Nigel (2001): Supporting Diagrammatic Knowledge Acquisition: An Ontological Analysis of Cartesian Graphs. In International Journal of Human-Computer Studies, 54 (4) pp. 457-494.
Cartesian graphs constitute an important class of knowledge representation devices. As part of a project on diagrammatic knowledge acquisition we have formulated principles that can underpin the construction, interpretation and use of Cartesian graphs in general and in the specific context of knowledge acquisition. Cartesian graphs are distinguished from other forms of representations by the manner in which they use two-dimensional space to encode quantities on interval or ratio scales. An ontological approach to the analysis of graphs was adopted in which a framework for mapping between the EngMath ontology for engineering mathematics and an ontology of visual components of graphs was developed, the GraphRep framework. GraphRep considers the roles of physical dimensions, measurement units, scales of measurement, functional relations amongst quantities and magnitudes in the generation and interpretation of graphs. It provides a typology of standard graphs and rules for the construction of composite graphs. The utility of the framework is demonstrated by using it: (1) to explain why a particular type of complex composite graph is often used for problem solving in thermodynamics; (2) to analyse the limitations of existing software packages for visualizing data, such as spreadsheets, and to suggest improvements in their design; and (3) to provide constraints and guidelines for the design of procedures and software to support diagrammatic knowledge acquisition with Cartesian graphs.
© All rights reserved Cheng et al. and/or Academic Press
Middleton, Stuart E., Roure, David De and Shadbolt, Nigel (2001): Capturing knowledge of user preferences: ontologies in recommender systems. In: K-CAP 2001 - Proceedings of the First International Conference on Knowledge Capture October 21-23, 2001, Victoria, BC, Canada. pp. 100-107.
Milton, Nick, Shadbolt, Nigel, Cottam, Hugh and Hammersley, Mark (1999): Towards a Knowledge Technology for Knowledge Management. In International Journal of Human-Computer Studies, 51 (3) pp. 615-641.
Knowledge Management (KM) is crucial to organizational survival, yet is a difficult task requiring large expenditure of resources. Information Technology solutions, such as email, document management and intranets, are proving very useful in certain areas. However, many important problems still exist, providing opportunities for new techniques and tools more oriented towards knowledge. We refer to this as Knowledge Technology. A framework has been developed which has allowed opportunities for Knowledge Technology to be identified in support of five key KM activities: personalization, creation/innovation, codification, discovery and capture/monitor. In developing Knowledge Technology for these areas, methods from knowledge engineering are being explored. Our main work in this area has involved the application and evaluation of existing knowledge for a large intranet system. This, and other case studies, have provided important lessons and insights which have led to ongoing research in ontologies, generic models and process modelling methods. We believe that the evidence presented here shows that knowledge engineering has much to offer KM and can be the basis on which to move towards a Knowledge Technology.
© All rights reserved Milton et al. and/or Academic Press
Shadbolt, Nigel, O'Hara, Kieron and Crow, Louise (1999): The Experimental Evaluation of Knowledge Acquisition Techniques and Methods: History, Problems and New Directions. In International Journal of Human-Computer Studies, 51 (4) pp. 729-755.
The special problems of experimentally evaluating knowledge acquisition and knowledge engineering tools, techniques and methods are outlined, and illustrated in detail with reference to two series of studies. The first is a series of experiments undertaken at Nottingham University under the aegis of the UK Alvey initiative and the ESPRIT project ACKnowledge. The second is the series of Sisyphus benchmark studies. A suggested programme of experimental evaluation is outlined which is informed by the problems with using Sisyphus for evaluation.
© All rights reserved Shadbolt et al. and/or Academic Press
Benjamins, V. Richard and Shadbolt, Nigel (1998): Preface: Knowledge Acquisition for Planning. In International Journal of Human-Computer Studies, 48 (4) pp. 409-416.
Cottam, Hugh and Shadbolt, Nigel (1998): Knowledge Acquisition for Search and Rescue Planning. In International Journal of Human-Computer Studies, 48 (4) pp. 449-473.
There is an increasing adoption of knowledge-level modelling within expert system development. However, it has had less impact in the generic areas of planning, scheduling and resource allocation. In this paper, we outline the development of a knowledge-level modelling approach within the domain of planning for search and rescue (SAR). Existing problem solving models for planning are almost exclusively derived from an analysis of the functional architectures of classic AI planners such as TWEAK and NONLIN. We argue that this makes their suitability for directly assisting knowledge acquisition questionable. Our approach makes a clear distinction between domain-derived knowledge-level models and those derived from computational architectures. We describe how the combination of these two types of models can achieve clear benefits within the course of KBS development. The paper includes extensive descriptions of the SAR domain, which illustrate the practical knowledge engineering problems that our approach attempts to address.
© All rights reserved Cottam and Shadbolt and/or Academic Press
O'Hara, Kieron, Shadbolt, Nigel and Heijst, Gertjan van (1998): Generalized Directive Models: Integrating Model Development and Knowledge Acquisition. In International Journal of Human-Computer Studies, 49 (4) pp. 497-522.
The Generalized directive model (GDM) methodology for knowledge acquisition is introduced. For GDMs to work, two assumptions are required: that knowledge acquisition has a cyclic structure interleaving episodes of model development and domain KA, and that increased specification of one part of a model does not affect other parts. The use of GDMs is illustrated with a real-world example from an Airborne Early Warning system, showing the development of a model for one sub-task using the PC-based GDM tool from the commercial workbench PC-PACK. There is also a small example of a GDM analysis extending an already existing model. Finally, GDMs are compared with the decompositional CommonKADS library.
© All rights reserved O'Hara et al. and/or Academic Press
Motta, Enrico, Stutt, Arthur, Zdrahal, Zdenek, O'Hara, Kieron and Shadbolt, Nigel (1996): Solving VT in VITAL: A Study in Model Construction and Knowledge Reuse. In International Journal of Human-Computer Studies, 44 (3) pp. 333-371.
In this paper we discuss a solution to the Sisyphus II elevator design problem developed using the VITAL approach to structured knowledge-based system development. In particular we illustrate in detail the process by which an initial model of Propose&Revise problem solving was constructed using a generative grammar of model fragments and then refined and operationalized in the VITAL operational conceptual modelling language (OCML). In the paper we also discuss in detail the properties of a particular Propose&Revise architecture, called "Complete-Model-then-Revise", and we show that it compares favourably in terms of competence with alternative Propose&Revise models. Moreover, using as an example the VT domain ontology provided as part of the Sisyphus II task, we critically examine the issues affecting the development of reusable ontologies. Finally, we discuss the performance of our problem solver and we show how we can use machine learning techniques to uncover additional strategic knowledge not present in the VT domain.
© All rights reserved Motta et al. and/or Academic Press
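The Propose&Revise schema discussed in the abstract above can be sketched in a few lines. This is purely illustrative: the paper's actual problem solver is expressed in OCML, not Python, and all names here (propose, constraints, fixes) are hypothetical. The sketch follows the "Complete-Model-then-Revise" variant the abstract mentions: a complete design is proposed first, and constraint violations are then repaired.

```python
def propose_and_revise(parameters, propose, constraints, fixes, max_revisions=100):
    """Propose a complete design, then repeatedly revise it until all
    constraints hold (Complete-Model-then-Revise style, illustrative only)."""
    design = {}
    for param in parameters:
        design[param] = propose(param, design)   # extend the partial design
    for _ in range(max_revisions):
        violated = [c for c in constraints if not c(design)]
        if not violated:
            return design                        # all constraints satisfied
        for c in violated:
            design = fixes[c](design)            # apply this constraint's fix
    raise RuntimeError("no consistent design found within the revision limit")
```

For example, with a single constraint `x + y >= 3` whose fix increments `y`, an initial proposal of `x = y = 1` is revised once to yield `y = 2`.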
Shadbolt, Nigel, O'Hara, Kieron and Schreiber, Guus (1996): Advances in Knowledge Acquisition, Lecture Notes in Computer Science Series. Berlin, Germany: Springer.
Motta, Enrico, O'Hara, Kieron and Shadbolt, Nigel (1994): Grounding GDMs: A Structured Case Study. In International Journal of Human-Computer Studies, 40 (2) pp. 315-347.
In this paper a solution to the Sisyphus room allocation problem is discussed which uses the generalized directive model (GDM) methodology developed in the ACKnowledge project, together with the knowledge engineering methodology developed in the VITAL project. After briefly introducing these methodologies, the paper presents a "walkthrough" of the Sisyphus solution from analysis to implementation in detail, so that all acquisition, modelling and design decisions can be seen in context. The selection of a reusable off-the-shelf model from the GDM library is presented, together with a discussion of the ways in which this selection process can drive the knowledge acquisition process. Next, there is an account of the instantiation of the GDM and the imposition of a control regime over the dataflow structure; we show how this process uncovers hidden constraints and inconsistencies in Siggi's account of his own problem solving. The output of this KA phase consists of a conceptual model of the problem which is discussed in detail and formalized in terms of the VITAL conceptual modelling language. From this analysis of the problem, we move on to discussion of the issues concerning the design and implementation of a system, and we show how our implementation satisfies the specification of the Sisyphus problem.
© All rights reserved Motta et al. and/or Academic Press
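The Sisyphus task solved in the paper above is a room-allocation problem: assign people to rooms subject to explicit constraints. A minimal sketch of such a problem as backtracking search, with entirely hypothetical names and constraints (the paper's own solution is a GDM-based conceptual model, not this code), might look like:

```python
def allocate(people, rooms, constraints, assignment=None):
    """Assign each person a room by backtracking search, keeping only
    assignments that satisfy every constraint (illustrative sketch)."""
    assignment = assignment or {}
    if len(assignment) == len(people):
        return assignment                        # everyone is placed
    person = people[len(assignment)]             # next person to place
    for room in rooms:
        candidate = {**assignment, person: room}
        if all(c(candidate) for c in constraints):
            result = allocate(people, rooms, constraints, candidate)
            if result is not None:
                return result                    # consistent completion found
    return None                                  # dead end; backtrack
```

With a constraint pinning the head of group to one room and a one-person-per-room constraint, the search places the remaining people in the rooms left over.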
Shadbolt, Nigel (1993): "Eliciting and Analyzing Expert Judgement: A Practical Guide," by M. Meyer and J. Booker. In International Journal of Man-Machine Studies, 38 (6) pp. 1049-1055.