TTF Projects

From Wsmx-WIKI

This is a list of projects identified by the chairs of the TTF working groups. If you are interested in getting involved in any of these projects, please contact the relevant contact person.

SWS Discovery for Composition

Description: SWS Composition is a process that automatically arranges several existing SWS into a composed SWS that fulfills a complex user requirement. Composition needs to interface with Discovery to find the "existing" services from which to compose. The approach followed in this project is that, in a pre-process to Composition, Discovery finds a set of SWS that is "relevant" to the given user requirement; Composition then draws its SWS from that set. "Relevance" here goes beyond the usual notion of matching SWS against user requirements: usually, an SWS matches the requirement if it is suitable to implement it; by contrast, in our setting, an SWS must be considered relevant for the requirement if it can form some part of a complex solution for it. Concretely, the task of the project is to implement a loop around standard Discovery, performing a simple forward-chaining style algorithm to iteratively find a set of relevant SWS.
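As a rough illustration of the forward-chaining loop (not the actual WSMX Discovery API), services can be modeled as `(name, inputs, outputs)` triples over concept names; the set operations below stand in for real semantic matchmaking calls:

```python
def relevant_services(services, provided_concepts):
    """Iteratively collect the set of services "relevant" to a requirement.

    Starting from the concepts the user can provide, repeatedly add any
    service whose inputs are already available and make its outputs
    available in turn, until a fixpoint is reached (forward chaining).
    A real implementation would replace the subset test with a call to
    standard semantic Discovery.
    """
    available = set(provided_concepts)
    relevant = set()
    changed = True
    while changed:
        changed = False
        for name, inputs, outputs in services:
            if name not in relevant and inputs <= available:
                relevant.add(name)       # service can contribute to a solution
                available |= outputs     # its outputs become new inputs
                changed = True
    return relevant
```

Note that a service is collected even if it only contributes an intermediate result, which is exactly the broader notion of "relevance" described above.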

Contact: james.scicluna@sti2.at

WSMO compliant processes to BPEL translator

Description: As a comprehensive ontology covering all Semantic Web Service related aspects, the Web Service Modeling Ontology (WSMO) still has to prove its applicability with respect to existing technologies and standards. One important aspect is the compatibility of the WSMO choreography model with existing process modeling and execution standards. For this, a Business Process Modeling Ontology (BPMO) based on the WSMO choreography model, which in turn is based on the ASM methodology, is being developed. This task consists of building a prototype for BPMO to BPEL translation.

Contact: mick.kerrigan@sti2.at

Persistent Storage for WSMX Resource Manager

Description: The current version of the storage implementation provides only in-memory storage of WSMO top-level entities, i.e. the data lives only for the runtime of the system. A persistent storage facility is required to cope with the dynamic nature of future service architectures. Such a facility should allow the WSMO top-level entities to be stored in an underlying database, so that WSML files and objects can be reloaded and searched from the database when WSMX starts up. Different persistent storage facilities might be required by WSMX for the different types of entities: ontologies, service and goal descriptions, and mediators, but also WSDL-based Web service descriptions for traditional invocation.
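A database-backed entity store along these lines could be sketched as follows; the table layout, entity kinds, and method names are hypothetical and only illustrate the reload/search-on-startup idea, not the actual Resource Manager interfaces:

```python
import sqlite3

class EntityStore:
    """Toy persistent store for WSMO top-level entities, keyed by IRI.

    One table per entity kind; each row keeps the serialized WSML text so
    documents can be reloaded and searched after a restart.
    """
    KINDS = ("ontology", "webservice", "goal", "mediator")

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        for kind in self.KINDS:
            self.db.execute(
                f"CREATE TABLE IF NOT EXISTS {kind} "
                "(iri TEXT PRIMARY KEY, wsml TEXT)")

    def save(self, kind, iri, wsml_text):
        assert kind in self.KINDS, f"unknown entity kind: {kind}"
        self.db.execute(
            f"INSERT OR REPLACE INTO {kind} VALUES (?, ?)", (iri, wsml_text))
        self.db.commit()

    def load(self, kind, iri):
        assert kind in self.KINDS, f"unknown entity kind: {kind}"
        row = self.db.execute(
            f"SELECT wsml FROM {kind} WHERE iri = ?", (iri,)).fetchone()
        return row[0] if row else None

    def search(self, kind, fragment):
        """Naive full-text search over the stored WSML documents."""
        assert kind in self.KINDS, f"unknown entity kind: {kind}"
        return [iri for (iri,) in self.db.execute(
            f"SELECT iri FROM {kind} WHERE wsml LIKE ?", (f"%{fragment}%",))]
```

With a file path instead of `:memory:`, the same store survives a WSMX restart, which is the point of the project.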

Contact: federico.facca@sti2.at

Triplespace-Based Storage for Resource Manager

Description: As an alternative or complement to providing WSMX with different persistent repositories for top-level entities, the goal is to run WSMX components as triplespace users. To this end it is necessary to extend the transformation of existing WSML data into an RDF representation that matches the data model of the communication and coordination middleware. The repository interfaces of the WSMX Resource Manager must be linked to a triplespace. Storing WSMO top-level entities in Triplespace is expected to speed up access to data items by filtering the relevant WSML files with the help of RDF-based reasoning before applying full-fledged WSML reasoning algorithms.
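The filtering step can be illustrated with plain triple pattern matching; the triple layout and IRIs below are made up, and a real triplespace would of course use an RDF store rather than Python sets:

```python
def matches(triple, pattern):
    """A pattern component of None acts as a wildcard."""
    return all(p is None or p == t for t, p in zip(triple, pattern))

def prefilter(space, pattern):
    """Return the IRIs of stored documents with at least one matching triple.

    Only these candidates would then be loaded for expensive full WSML
    reasoning; everything else is ruled out cheaply at the RDF level.
    """
    return {doc for doc, triples in space.items()
            if any(matches(t, pattern) for t in triples)}
```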

Contact: federico.facca@sti2.at

Semantic Mediation Services

Description: The mediation services play a crucial role in WSMX. They encapsulate the actual technical solution needed to solve a given heterogeneity problem. At the first level there are the mediation services, classical Web services able to offer mediation solutions for a specific class of heterogeneity problems (for example, a service able to perform ontology-based data mediation, in particular the transformation of instances from the terms of the source ontology into the terms of the target ontology). The mediation services will be semantically described as Semantic Mediators using semantic technologies (i.e. WSMO). Two mediation run-time tools have been developed as part of WSMX so far: ontology-to-ontology run-time mediation and process mediation. The task consists of deploying these components as classical Web services (first step) and semantically describing them as Semantic Mediator Services (second step). Some minor adjustments and updates to the run-time components may also be needed (for this, the full support of the people who developed the tools will be provided).

Contact: mick.kerrigan@sti2.at

Designing Semantic Service Description for Discovery

Description: Enabling efficient service discovery is a prerequisite for a dynamic service-oriented architecture. Within DERI, different approaches have been developed for how semantic technologies can be used for this task. The focus of this work lies on helping the user to create semantic descriptions that can be used in the discovery process. The GUI to be developed shall allow the creation of goal and Web service descriptions as well as their validation and testing.

Contact: nathalie.steinmetz@sti2.at

Parser - Implementation of a new parser for WSML

Description: Currently we use SableCC as the parser generator. The new parser should above all be faster than the current one and provide fault handling (error reporting and recovery).

Contact: barry.bishop@sti2.at

OWL/WSML conversion - Improve the OWL-DL compatibility of WSML

Description: Assure backward compatibility when converting WSML to OWL and vice versa. Check the functionality of the OWL parser in wsmo4j and, if necessary, implement a new transformation, ensuring a correct translation from OWL-DL to WSML-DL.

Contact: barry.bishop@sti2.at

WSML Validator - Extend the WSML Validator to use error codes

Description: Currently all results that come back from the validator are plain text strings. These can neither be localized nor acted upon by the consuming application without using '.equals()' or regular expressions. Even if such solutions were employed, they would break as soon as a new message was added or an existing one changed. Instead of keeping the error descriptions in the validator, it should therefore rely on error codes that can be resolved using a resource bundle.
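The intended pattern can be sketched as follows; the error codes, message texts, and bundles here are invented examples, not the validator's actual checks. Applications compare against the stable code, while the human-readable text comes from a locale-specific bundle:

```python
from enum import Enum

class ValidationError(Enum):
    """Stable, language-independent error codes (illustrative names)."""
    UNKNOWN_VARIANT = 1
    UNDECLARED_NAMESPACE = 2

# Resource bundles map codes to localized message templates.
BUNDLE_EN = {
    ValidationError.UNKNOWN_VARIANT: "unknown WSML variant: {0}",
    ValidationError.UNDECLARED_NAMESPACE: "namespace {0} is not declared",
}
BUNDLE_DE = {
    ValidationError.UNKNOWN_VARIANT: "unbekannte WSML-Variante: {0}",
    ValidationError.UNDECLARED_NAMESPACE: "Namensraum {0} ist nicht deklariert",
}

def render(code, bundle, *args):
    """Resolve an error code against a resource bundle at display time."""
    return bundle[code].format(*args)
```

Changing or adding a message then touches only the bundles, never the code that switches on `ValidationError`.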

Contact: barry.bishop@sti2.at

Small WSMO4J revisions

Description: This topic includes two different tasks: a) revise the data value implementation in WSMO4J, and b) refactor the WSMO4J exceptions. The current exception mechanism is not very consistent. We need to check whether we need more fine-grained exceptions in addition to the existing ones and whether the use of exceptions is always appropriate.
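One common shape for such a refactoring is a single library-wide base class with fine-grained subclasses underneath; the class names below are hypothetical and do not reflect the actual WSMO4J hierarchy:

```python
class WsmoException(Exception):
    """Common base class so callers can catch all library errors at once."""

class ParserException(WsmoException):
    """Raised when WSML input cannot be parsed."""

class InvalidModelException(WsmoException):
    """Raised when an entity violates the object model."""

class DataValueException(InvalidModelException):
    """Fine-grained case: a literal cannot be read as its declared type.

    Carrying the offending value and expected type as fields lets callers
    react programmatically instead of parsing the message string.
    """
    def __init__(self, value, expected_type):
        super().__init__(f"cannot interpret {value!r} as {expected_type}")
        self.value = value
        self.expected_type = expected_type
```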

Contact: barry.bishop@sti2.at

WSMO4J Workspaces

Description: Develop and implement workspace support in WSMO4J.

Contact: barry.bishop@sti2.at

Benchmark for Query Optimization in IRIS Reasoner

Description: As part of the IRIS (Integrated Rule Inference System) project, we will be developing different optimization techniques for efficient machine reasoning. IRIS is a reasoner based on a well-studied rule-based knowledge representation called Datalog. IRIS implements several evaluation strategies (e.g., naïve and semi-naïve evaluation, QSQ) as well as optimization techniques (e.g., Magic Sets). The close link between Datalog, relational algebra and SQL allows us to take efficient database techniques and apply them to reasoners in order to achieve good performance. There are a number of optimization techniques developed for Relational Database Management Systems (RDBMS). However, these techniques were developed for extensional knowledge (knowledge that is present in a database) and are not directly applicable to intensional knowledge (i.e., knowledge that is contained in logical rules). We are investigating different query cost models based on heuristics and statistics, with the aim of implementing an efficient query optimizer in IRIS. Hence the aims of this thesis are: 1) to develop and implement a benchmark which will be used for comparing the performance of the different optimization techniques being developed for IRIS; 2) to evaluate the benchmark results; 3) to address questions which could possibly lead to new optimization techniques.
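The skeleton of such a benchmark is small; the sketch below only shows the harness shape (strategies as callables over a shared query workload, best-of-N timing to damp noise) and assumes nothing about the actual IRIS interfaces:

```python
import time

def benchmark(strategies, queries, repeats=3):
    """Time each evaluation strategy over the same query workload.

    `strategies` maps a strategy name to a callable that evaluates one
    query; the best wall-clock time of `repeats` runs is reported, a
    common way to reduce the influence of OS scheduling noise.
    """
    results = {}
    for name, evaluate in strategies.items():
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            for query in queries:
                evaluate(query)
            best = min(best, time.perf_counter() - start)
        results[name] = best
    return results
```

A real benchmark would additionally verify that all strategies return the same answers before comparing their times.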

Contact: barry.bishop@sti2.at

Evaluating Database Techniques for Reasoning with Large Datasets

Description: IRIS (Integrated Rule Inference System) is a reasoner based on a well-studied rule-based knowledge representation called Datalog. Currently IRIS is a main-memory system, which means that there is no way to persist the results of the reasoning computation (i.e., intensional data). We are working on a tight integration of IRIS with an RDBMS (Relational Database Management System). It is possible to utilize mature database techniques, particularly caching and buffering, to enable effective reasoning with large datasets. Our intention is therefore to adapt and extend these techniques and implement them in IRIS. Hence the aims of this thesis are: 1) to develop and implement a benchmark which will be used for comparing the performance of the different memory management techniques being developed for IRIS; 2) to evaluate the benchmark results; 3) to address questions which could possibly lead to new caching and buffering techniques suitable for reasoning with large datasets.
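As one concrete example of the buffering techniques in question, a least-recently-used page cache in front of the database can be sketched as follows; the page abstraction and the `fetch` callback are illustrative stand-ins for an RDBMS read, not IRIS code:

```python
from collections import OrderedDict

class LRUPageCache:
    """Fixed-capacity cache of relation pages with LRU eviction.

    On a miss the page is fetched from backing storage; when the cache is
    full, the least recently used page is evicted. Hit/miss counters make
    the cache easy to benchmark.
    """
    def __init__(self, capacity, fetch):
        self.capacity = capacity
        self.fetch = fetch              # callable: page_id -> page data
        self.pages = OrderedDict()      # insertion order tracks recency
        self.hits = self.misses = 0

    def get(self, page_id):
        if page_id in self.pages:
            self.hits += 1
            self.pages.move_to_end(page_id)       # mark as recently used
        else:
            self.misses += 1
            self.pages[page_id] = self.fetch(page_id)
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)    # evict LRU page
        return self.pages[page_id]
```

The benchmark from the previous project could then compare, e.g., LRU against other replacement policies under a fixed memory budget.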

Contact: barry.bishop@sti2.at