Submitted Papers

 

Overview

 

The paper submission deadlines have now passed. A list of submitted papers with abstracts is below.

 

 

 

Abstracts of Accepted Papers

 

Theme: Computing Clouds with Telecom Grade “Trust” and Global Interoperability

 

The combination of hardware-assisted virtualization and the broadband Internet has taken Information Technology (IT) hosted managed services to the next level of evolution, where software applications have become independent of the hardware infrastructure and can be migrated at will.  This introduces two key issues that need to be addressed to fully leverage the potential that the new servers and virtualization offer.

 

 

The Second International IEEE Workshop on Collaboration & Cloud Computing, held under the aegis of the 19th IEEE International Workshops on Enabling Technologies: Infrastructures for Collaborative Enterprises (WETICE 2010), is devoted to addressing the operation and management issues in next-generation cloud computing.  Seven papers discuss various aspects of bringing telecom-grade “trust” and global interoperability to distributed collaborating computing clouds.

 

Next Generation Cloud Computing Architecture: Enabling Real-time Dynamism for Shared Distributed Physical Infrastructure - Vijay Sarathy, Purnendu Narayan and Rao Mikkilineni, PhD, Kawa Objects, Inc.

 

 

Cloud computing is fundamentally altering the expectations for how and when computing, storage and networking resources should be allocated, managed and consumed. End-users are increasingly sensitive to the latency of services they consume. Service Developers want the Service Providers to ensure or provide the capability to dynamically allocate and manage resources in response to changing demand patterns in real-time. Ultimately, Service Providers are under pressure to architect their infrastructure to enable real-time end-to-end visibility and dynamic resource management with fine-grained control to reduce total cost of ownership while also improving agility.
 
The current approaches to enabling real-time, dynamic infrastructure are inadequate, expensive, and not scalable to support consumer mass-market requirements. Over time, server-centric infrastructure management systems have evolved into a complex tangle of layered systems designed to automate systems administration functions that are knowledge- and labor-intensive. This expensive, non-real-time paradigm is ill suited for a world where customers demand communication, collaboration and commerce at the speed of light. Thanks to hardware-assisted virtualization, and the resulting decoupling of infrastructure and application management, it is now possible to provide dynamic visibility and control of services management to meet the rapidly growing demand for cloud-based services.

What is needed is a rethinking of the underlying operating system and management infrastructure to accommodate the ongoing transformation of the data center from the traditional server-centric architecture model to a cloud or network-centric model. This paper proposes and describes a reference model for a network-centric datacenter infrastructure management stack that borrows key concepts that have enabled dynamism, scalability, reliability and security in the telecom industry, and applies them to the computing industry. Finally, the paper describes a proof-of-concept system implemented to demonstrate how dynamic resource management can enable real-time service assurance for a network-centric datacenter architecture.

 

[download paper]

 

[download presentation]

Enterprise Usability of Cloud Computing: Issues and Challenges - Pankaj Goyal, PhD, MicroMega Inc., Denver, USA

 

 

Cloud Computing Environments (CCE) enable business agility, allowing enterprises to exploit and respond quickly to changes in the marketplace, competition, technology and operating environment.  For enterprises, CCEs present a number of issues and challenges.  Some of these include the inability to take advantage of the dynamic resource elasticity of the CCE, issues of data fragmentation and duplication, the challenge of migrating transactional systems to a CCE, and the challenge of using SaaS applications in conjunction with other enterprise applications, say, to implement end-to-end process integration.  Enterprises also have extended partnerships and aim to extend integration to the participating systems.  This paper presents some of these challenges and mechanisms to alleviate some of them when operating and managing a hybrid CCE.  The paper also presents mechanisms for robust network interconnectivity and partitioning mechanisms to co-isolate cooperating systems.

 

[download paper]

 

[download presentation]

Application Development: Fly to the Clouds or Stay in-House? - Matina Bibi, Department of Informatics, Aristotle University of Thessaloniki, Thessaloniki, Greece & Dimitrios Katsaros and Panayiotis Bozanis, Department of Computer & Communications Engineering, University of Thessaly, Volos, Greece

 

 

Cloud computing is a recent trend in IT that moves computing and data away from desktop and portable PCs into large data centers, and outsources the “applications” (hardware and software) as services over the Internet. Cloud computing promises to increase the velocity with which applications are deployed, increase innovation, and lower costs, all while increasing business agility. But is migration to the Cloud the most profitable option for every business? This article presents a study of the basic parameters for estimating the potential costs of building and deploying applications on cloud versus on-premise assets.
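As a rough illustration of the kind of trade-off such a cost study weighs, the sketch below compares cumulative pay-as-you-go cloud charges with on-premise capital and operating expenses over several time horizons. It is a toy model, not the paper's; every rate and figure in it is a hypothetical placeholder.

# Toy cost comparison: pay-as-you-go cloud vs. on-premise capex + opex.
# All rates below are hypothetical placeholders, not values from the paper.

def cloud_cost(months, instances, hourly_rate=0.40, hours_per_month=730):
    """Cumulative pay-as-you-go cost for a fixed number of always-on instances."""
    return months * instances * hourly_rate * hours_per_month

def on_premise_cost(months, servers, server_capex=4000.0, monthly_opex_per_server=150.0):
    """Upfront hardware purchase plus ongoing power/cooling/administration opex."""
    return servers * server_capex + months * servers * monthly_opex_per_server

if __name__ == "__main__":
    for months in (6, 12, 24, 36):
        c = cloud_cost(months, instances=4)
        p = on_premise_cost(months, servers=4)
        cheaper = "cloud" if c < p else "on-premise"
        print(f"{months:>2} months: cloud ${c:>9,.0f}   on-premise ${p:>9,.0f}   -> {cheaper} cheaper")

With these placeholder rates the pay-as-you-go option wins over short horizons and the on-premise option overtakes it around the three-year mark; the actual break-even point depends on utilization, workload burstiness and the other parameters the paper studies.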

 

[download paper]

 

[download presentation]

CloudGauge: A Dynamic Cloud and Virtualization Benchmarking Suite - Mohamed A. El-Refaey and Prof. Mohamed Abu Rizkaa, Arab Academy for Science, Technology and Maritime Transport, College of Computing & Information Technology, Cairo, Egypt

 

 

Cloud computing and virtualization technology are gaining momentum in data centres, research institutions and IT infrastructure models. A reference performance benchmark for the dynamic workloads found in virtualized and cloud environments is therefore vital to develop. A virtualized workload benchmark will help data center architects and administrators design elegant data center architectural models and establish a workload balance between virtualized and consolidated servers, according to the workload analysis and characterization produced by this reference benchmark.


In this paper, we present an overview of CloudGauge, a dynamic, experimental benchmark for virtualized and cloud environments, along with the key requirements and characteristics of virtual-system performance metrics, evaluation and workload characterization. This can be considered a step toward a production-ready benchmarking suite and performance models for virtual systems and clouds.
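For readers unfamiliar with what such a benchmark has to gather, the sketch below shows generic per-host workload sampling and characterization. It is not CloudGauge itself, only an illustration of the kind of metric collection and summarization a virtualization benchmarking suite automates across many VMs; it assumes the third-party psutil package is available.

# Generic workload sampling and characterization for a single (virtual) host.
# This is NOT CloudGauge; it merely illustrates the kind of raw metrics such a
# benchmarking suite collects per VM before building workload profiles.
import statistics
import time

import psutil  # third-party package, assumed installed

def sample_workload(duration_s=30, interval_s=1.0):
    """Collect CPU and memory utilization samples for roughly duration_s seconds."""
    cpu, mem = [], []
    end = time.time() + duration_s
    while time.time() < end:
        cpu.append(psutil.cpu_percent(interval=interval_s))  # % CPU over the interval
        mem.append(psutil.virtual_memory().percent)          # % RAM currently in use
    return cpu, mem

def characterize(samples):
    """Summary statistics used to compare workloads across hosts or VMs."""
    return {
        "mean": statistics.mean(samples),
        "p95": sorted(samples)[int(0.95 * (len(samples) - 1))],
        "stdev": statistics.pstdev(samples),
    }

if __name__ == "__main__":
    cpu, mem = sample_workload(duration_s=10)
    print("CPU:", characterize(cpu))
    print("MEM:", characterize(mem))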

 

[download paper]

 

[download presentation]

Using Virtualization to Prepare Your Data Center for “Real-time Assurance of Business Continuity” - Rao Mikkilineni, PhD, KawaObjects Inc., Los Altos, CA & Gopal Kankanhalli, VirtualXL, San Jose, USA

 

 

This paper describes our experience using the dynamic resource reallocation capabilities offered by virtualization technologies in implementing:

 

  •  Automation of end-to-end failover of a mission-critical virtualized application, a SAN network, and EMC Clarion-based storage to a remote site, and
  •  On-demand and scheduled assurance of failover, with measurement of RPO and RTO. (The Recovery Point Objective (RPO) is the point in time to which data must be recovered, as dictated by business needs. The Recovery Time Objective (RTO) is the period of time after an outage within which the application and its data must be restored to the predetermined state defined by the RPO.)

 

By creating an application-to-spindle resource utilization profile through various management systems, and by utilizing a combination of server-, network- and storage-virtualization technologies, application-specific RTO and RPO objectives (defined based on workload profiles and business priorities) are met in a technology-agnostic, multi-vendor environment.
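As a minimal sketch of how the measured RPO and RTO of a failover drill can be checked against their objectives, the snippet below compares an outage timeline with hypothetical targets. The timestamps, objectives and logic are generic illustrations, not the system described in the paper.

# Minimal sketch of checking achieved RPO/RTO after a failover drill.
# All timestamps and objectives below are hypothetical, not from the paper.
from datetime import datetime, timedelta

def achieved_rpo(outage_start, last_replicated_write):
    """Data-loss window: time between the last replicated write and the outage."""
    return outage_start - last_replicated_write

def achieved_rto(outage_start, service_restored):
    """Downtime window: time from the outage until the service is usable again."""
    return service_restored - outage_start

if __name__ == "__main__":
    outage_start          = datetime(2010, 5, 3, 10, 0, 0)
    last_replicated_write = datetime(2010, 5, 3, 9, 58, 30)
    service_restored      = datetime(2010, 5, 3, 10, 12, 0)

    rpo_objective = timedelta(minutes=5)   # hypothetical business requirement
    rto_objective = timedelta(minutes=15)  # hypothetical business requirement

    rpo = achieved_rpo(outage_start, last_replicated_write)
    rto = achieved_rto(outage_start, service_restored)
    print(f"RPO achieved: {rpo} (objective {rpo_objective}) ->",
          "met" if rpo <= rpo_objective else "missed")
    print(f"RTO achieved: {rto} (objective {rto_objective}) ->",
          "met" if rto <= rto_objective else "missed")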

 

 

[download paper]

 

[download presentation]

Creating Next Generation Cloud Computing Operation Support Services by Social OSS: Contribution with Telecom NGN Experience - Miyuki Sato, Fujitsu Co. Ltd., Japan

 

 

Currently, cloud network operators and service providers manage server, network and storage resources through various management systems that handle hardware, applications, traffic, and resource monitoring, including bandwidth, storage capacity and throughput utilization. However, these systems are not coordinated with accounting, security and configuration systems to provide end-to-end service management.  Various players are emerging to provide remote computing and storage resources over the cloud network. Telecommunications-grade operation systems will be the key to realizing end-to-end service management and resource management that assures application quality of service. In this paper we present how an operation service methodology using Social Cloud OSS will support end-to-end service management and resource (computing, network and storage) management over the cloud network. Fujitsu’s Social Cloud OSS will provide integrated operation services by supporting group-to-group communication and Enterprise-to-Enterprise (E2E) application transactions.  Social Cloud OSS is designed to support social collaboration and e-commerce applications using the cloud on a massive scale, at a lower Total Cost of Ownership than the current approaches with disparate management systems.

 

[download paper]

 

[download presentation]

A Framework of Scientific Workflow Management Systems for Multi-Tenant Cloud Orchestration Environment - Bhaskar Prasad Rimal, Kookmin University, Seoul, Korea & Mohamed A. El-Refaey, Arab Academy for Science, Technology and Maritime Transport, College of Computing & Information Technology, Cairo, Egypt

 

 

The volume and complexity of data are growing day by day. The computational world is becoming massive and needs scalable, efficient systems. Data management is an important stage in accelerating petabyte-scale processing. Such an environment requires scientific workflows to compose and manage these volumetric data sets. A scientific workflow differs from a general workflow: it deals with scheduling, algorithms, data flow, processes, operational procedures and especially data-intensive systems. The popularity of Software as a Service (SaaS) is due to its multi-tenancy. Studying scientific workflows in the context of a multi-tenant cloud orchestration environment addresses control flow, data flow, and new requirements for system development and the discovery of new services. We explore a framework of scientific workflow management for a multi-tenant cloud orchestration environment that handles semantic-based as well as policy-based workflows.

 

[download paper]

 

[download presentation]

A Social Network-Based Enhanced Learning System - M. Angelaccio and B. Buttarazzi, Department of Computer Science S&P, University of Rome Tor Vergata, Italy

 

 

The availability of p2p distributed systems on wireless networks makes it possible to explore new e-learning paradigms that adapt the learning workflow to be more interactive. In this work we give a preliminary description of EduSHARE, a p2p-based quiz management system that runs on a wireless ad hoc network and supports the traditional learning workflow with a dynamic, interactive quiz sharing and evaluation system.

 

[download paper]

 

[download presentation]