System Support for Distributed Dynamic Content Web Services

People

Pedone F. (Responsible)

External participants

Zwaenepoel Willy (Third-party responsible)

Abstract

Many current Web services are based on dynamic content. Users connect to a Web site and receive customized information, depending, for example, on their preferences and access patterns. In a centralized dynamic-content Web site, information is most often stored in a database. To improve performance and increase availability, however, more and more such systems rely on information replicated across wide-area networks. User requests are executed on “nearby” replicas, avoiding long round-trip delays and distributing the load over the replicas. Moreover, the failure of one database does not render the service unavailable, since the information can still be retrieved from other replicas. These improvements do not come for free, though: replicating the database across a wide-area network requires keeping the information consistent, a challenging task when efficiency is a major requirement.
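The replica-selection and failover behaviour described above can be pictured with a small sketch. The Python fragment below is purely illustrative and not part of the project's middleware; the Replica class, its latency_ms field, and the read helper are assumptions made for this example.

    # Minimal sketch (illustrative only): serve a read from the nearest
    # reachable replica, falling back to the others if it is unavailable.
    # All names (Replica, latency_ms, read) are assumptions for the example.

    class ReplicaUnavailable(Exception):
        pass

    class Replica:
        def __init__(self, name, latency_ms, available=True):
            self.name = name
            self.latency_ms = latency_ms   # estimated round-trip time to this replica
            self.available = available

        def query(self, sql):
            if not self.available:
                raise ReplicaUnavailable(self.name)
            return f"result of {sql!r} from {self.name}"

    def read(replicas, sql):
        # Try replicas in order of increasing network latency ("nearby" first);
        # a single failed replica does not make the service unavailable.
        for replica in sorted(replicas, key=lambda r: r.latency_ms):
            try:
                return replica.query(sql)
            except ReplicaUnavailable:
                continue
        raise RuntimeError("no replica available")

    replicas = [Replica("eu-west", 12), Replica("us-east", 95), Replica("asia", 180)]
    replicas[0].available = False               # simulate a failed nearby replica
    print(read(replicas, "SELECT * FROM news")) # served by us-east instead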

The overall goal of this project is to design and implement a highly efficient and highly available data-management middleware to serve as the underlying infrastructure of modern dynamic-content Web services. Reaching this goal involves two main efforts: defining consistency criteria that are adapted to large-scale distributed systems yet still meaningful to users, and building the infrastructure to implement them efficiently. Strong database consistency has traditionally been achieved through one-copy serializability (1SR). Despite all the efforts to implement 1SR efficiently, some of its performance costs are inherent to the guarantees it provides. Database providers have realized this and come up with alternatives that are cheaper to implement and still useful to users, such as Snapshot Isolation (SI). Snapshot Isolation has been defined for centralized databases, however, and applying it in a large-scale environment requires revisiting some of its properties. Moreover, implementing a consistency criterion efficiently in a large-scale environment requires complex replication management. Replica management has traditionally been implemented using the primary-backup or the state-machine approach, both of which need expensive synchronization primitives (e.g., group membership, atomic broadcast). Instead of relying on such general-purpose replica-management approaches, we intend to design novel algorithms that take semantic information (e.g., Snapshot Isolation) into account to improve performance in large-scale environments, a technique that has proven useful to us in small-scale systems (e.g., local-area networks).
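As a point of reference, the write-write conflict rule that distinguishes Snapshot Isolation from one-copy serializability can be sketched in a few lines. The fragment below is a toy, centralized validator, not the project's replication algorithm; the SIValidator class and its method names are illustrative assumptions.

    # Minimal sketch (illustrative only): the "first committer wins" rule at
    # the core of Snapshot Isolation. A transaction reads from the snapshot
    # taken at its start and may commit only if no concurrent, already
    # committed transaction wrote to the same items.

    class SIValidator:
        def __init__(self):
            self.clock = 0          # logical commit timestamp
            self.committed = []     # list of (commit_ts, write_set)

        def begin(self):
            return self.clock       # snapshot timestamp of the new transaction

        def try_commit(self, snapshot_ts, write_set):
            # Abort if a transaction that committed after our snapshot wrote
            # to any item we also wrote (write-write conflict).
            for commit_ts, ws in self.committed:
                if commit_ts > snapshot_ts and ws & write_set:
                    return False    # first committer wins; this transaction aborts
            self.clock += 1
            self.committed.append((self.clock, set(write_set)))
            return True

    v = SIValidator()
    t1 = v.begin(); t2 = v.begin()      # two concurrent transactions
    print(v.try_commit(t1, {"x"}))      # True: t1 is the first committer on x
    print(v.try_commit(t2, {"x"}))      # False: write-write conflict, t2 aborts
    t3 = v.begin()
    print(v.try_commit(t3, {"x"}))      # True: t3 started after t1 committed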

Additional information

Start date: 01.01.2008
End date: 31.03.2008
Duration: 3 months
Funding sources: SNSF
External partners: Main beneficiary Prof. Zwaenepoel, EPFL
Status: Ended
Category: Swiss National Science Foundation / Project Funding / Mathematics, Natural and Engineering Sciences (Division II)