Continuing to stretch the boundaries of computing and the types of problems computers can solve, high performance, cloud, and grid computing have emerged to address increasingly advanced issues by combining resources. Cloud, Grid and High Performance Computing: Emerging Applications offers new and established perspectives on architectures, services and the resulting impact of emerging computing technologies. Intended for professionals and researchers, this publication furthers investigation of practical and theoretical issues in the related fields of grid, cloud, and high performance computing.
Cloud computing has emerged as the natural successor to the different strands of distributed systems: concurrent, parallel, distributed, and Grid computing. Like a killer application, cloud computing is causing governments and the enterprise world to embrace distributed systems with renewed interest. In evolutionary terms, clouds herald the third wave of Information Technology, in which virtualized resources (platform, infrastructure, software) are provided as a service over the Internet.
This economic front of cloud computing, whereby users are charged based on their usage of computational resources and storage, is driving its current adoption and the creation of opportunities for new service providers. As can be gleaned from press releases, the US government has registered strong interest in the overall development of cloud technology for the betterment of the economy.
This approach follows a global vision in which users plug their computing devices into the Internet and tap into as much processing power as needed.
Cloud technology comes in different flavors: public, private, and hybrid clouds. Public clouds are provided remotely to users from third-party-controlled data centers, whereas private clouds are built on virtualization and service-oriented architecture hosted in traditional corporate settings. The economies of scale enjoyed by large data-center vendors like Google give public clouds an economic edge over private clouds.
However, security is a major source of concern about public clouds, as organizations will not scatter resources across the Internet, least of all their prized databases, without a measure of certainty or safety assurance. In this vein, private clouds will persist until public clouds mature and garner corporate trust. The embrace of cloud computing is also affecting the adoption of Grid technology. The perceived usefulness of Grid computing is not in question, but other factors weigh heavily against its adoption, such as complexity and maintenance costs, as well as competition from clouds.
However, the Grid might not be totally relegated to the background, as it could complement research in the development of cloud middleware (Udoh). In that sense, this book considers and foresees other distributed systems no longer standing alone as entities, as before, but largely subordinate, supporting and complementing the increasingly appealing cloud technology.
The new advances in cloud computing will greatly impact IT services, resulting in improved computational and storage resources as well as service delivery. To keep educators, students, researchers, and professionals abreast of advances in the cloud, Grid, and high performance computing, this book series Cloud, Grid, and High Performance Computing: Emerging Applications will provide coverage of topical issues in the discipline.
It will shed light on concepts, protocols, applications, methods, and tools in this emerging and disruptive technology. The book is organized in four distinct sections covering wide-ranging topics: (1) Introduction, (2) Scheduling, (3) Security, and (4) Applications. Section I, Introduction, provides an overview of supercomputing and the porting of applications to Grid and cloud environments. Cloud, Grid, and high performance computing are firmly dependent on the information and communication infrastructure.
The different types of cloud computing - software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS) - and the data centers exploit commodity servers and supercomputers to serve the current needs of on-demand computing. The chapter Supercomputers in Grids, by Michael Resch and Edgar Gabriel, focuses on the integration and limitations of supercomputers in Grid and distributed environments.
It emphasizes the understanding and interaction of supercomputers, as well as their economic potential, as demonstrated in a public-private partnership project. Indeed, with the emergence of cloud computing, the need for supercomputers in data centers cannot be overstated.
In a similar vein, Porting HPC Applications to Grids and Clouds by Wolfgang Gentzsch guides users through the important stages of porting applications to Grids and clouds as well as the challenges and solutions.
The chapter also gives an overview of the future prospects of building sustainable Grid and cloud applications. To simplify the development of Grid applications, researchers developed JGRIM, which easily Gridifies Java applications by separating functional and Grid concerns in the application code. JGRIM simplifies the process of porting applications to the Grid and is competitive with similar tools in the market. Section II, Scheduling, covers a central component in the implementation of Grid and cloud technology.
Efficient scheduling is a complex and attractive research area, as priorities and load balancing have to be managed. Sometimes fitting jobs to a single site is not feasible in Grid and cloud environments, requiring the scheduler to improve the allocation of parallel jobs for efficiency.
In Moldable Job Allocation for Handling Resource Fragmentation in Computational Grid, Huang, Shih, and Chung exploited the moldable property of parallel jobs to formulate adaptive processor allocation policies for job scheduling in Grid environments. In a series of simulations, the authors demonstrated how the proposed policies significantly improved scheduling performance in heterogeneous computational Grids.
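The core idea of moldable allocation can be illustrated with a small sketch: a moldable job accepts any processor count within a range, so the scheduler can fold it down to fit a site's free processors instead of leaving them idle. The function names and the perfect-speedup runtime model below are illustrative assumptions, not the authors' actual policies.

```python
# Toy sketch of adaptive allocation for a moldable parallel job. The job can
# run on any processor count in [min_procs, max_procs]; molding it to the
# free processors of a site reduces resource fragmentation.
# (Names and the speedup model are assumptions for illustration.)

def pick_allocation(free_procs, min_procs, max_procs):
    """Return a processor count for the job, or None if it cannot start now."""
    if free_procs < min_procs:
        return None                      # not enough capacity; job must wait
    return min(free_procs, max_procs)    # mold the job down to what is free

def runtime(base_time, procs):
    """Crude speedup model: perfect scaling (an assumption for the sketch)."""
    return base_time / procs

# A job that prefers 64 processors, on a site with only 48 free:
procs = pick_allocation(free_procs=48, min_procs=16, max_procs=64)
print(procs, runtime(base_time=960.0, procs=procs))  # 48 20.0
```

Under a rigid allocation, the 64-processor request would have waited; molding it to 48 processors starts it immediately at a modest runtime cost.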
In another chapter, Speculative Scheduling of Parameter Sweep Applications Using Job Behavior Descriptions, Ulbert, Lorincz, Kozsik, and Horvath demonstrated how estimating job completion times could ease decisions in job scheduling, data migration, and replication.
The authors discussed three approaches to using complex job descriptions for single and multiple jobs; the new scheduling algorithms are more precise in estimating job completion times. Furthermore, some applications with stringent security requirements pose major challenges in computational Grid and cloud environments. To address these requirements, in A Security Prioritized Computational Grid Scheduling Model: An Analysis, Rekha Kashyap and Deo Vidyarthi proposed a security-aware computational scheduling model that modifies an existing Grid scheduling algorithm.
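The classic MinMin heuristic that such security-prioritized models build on can be sketched briefly: each round, every unscheduled task is paired with its fastest-completing machine, and the task with the smallest such completion time is scheduled first. Only plain MinMin is shown here; the security prioritization is the chapter's contribution and is not reproduced.

```python
# Minimal MinMin scheduling sketch. etc[i][j] is the expected time to compute
# task i on machine j; the makespan is the largest machine finish time.

def min_min(etc, machines):
    ready = [0.0] * machines                 # current finish time per machine
    unscheduled = set(range(len(etc)))
    schedule = {}
    while unscheduled:
        # Globally smallest completion time over all (task, machine) pairs is
        # exactly the minimum of the per-task best completion times.
        t, m, finish = min(
            ((t, m, ready[m] + etc[t][m])
             for t in unscheduled for m in range(machines)),
            key=lambda x: x[2],
        )
        schedule[t] = m
        ready[m] = finish
        unscheduled.remove(t)
    return schedule, max(ready)              # assignment and makespan

etc = [[3.0, 5.0], [4.0, 2.0], [6.0, 7.0]]   # 3 tasks, 2 machines (made up)
schedule, makespan = min_min(etc, machines=2)
print(schedule, makespan)
```

A security-prioritized variant would reorder or constrain these choices so that tasks with stringent security demands are placed first, trading some makespan for assurance.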
The proposed Security Prioritized MinMin showed improved performance in terms of makespan and system utilization. In another chapter, a genetic algorithm (a natural selection and evolution method) is applied to optimize scheduling in computational Grids by minimizing turnaround time. The developed model, which compared favorably to existing models, was used to simulate and evaluate clusters to find the one with minimum turnaround time for job scheduling.
As cloud environments expand into the corporate world, improvements in GA methods could find use in such search problems. Section III, Security, addresses one of the major hurdles cloud technology must overcome before any widespread adoption by organizations.
Cloud vendors must meet the transparency test and risk assessment in information security and recovery. Falling short of these requirements might leave cloud computing frozen in private clouds.
Preserving user privacy and managing customer information, especially personally identifiable information, are central issues in the management of IT services. Wolfgang Hommel, in the chapter A Policy-based Security Framework for privacy-enhancing Data Access and Usage Control , discusses how recent advances in privacy enhancing technologies and federated identity management can be incorporated in Grid environments.
The chapter demonstrates how existing policy-based privacy management architectures could be extended to provide Grid-specific functionality and integrated into existing infrastructures, as demonstrated in an XACML-based privacy management system.
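The flavor of policy-based, privacy-aware access control can be conveyed with a toy model: a request carries subject, resource, and action attributes, and a policy of rules yields Permit or Deny. Real XACML policies are XML documents with rule-combining algorithms; this condensed Python stand-in is an assumption for illustration only.

```python
# Toy attribute-based access decision in the spirit of XACML: first matching
# rule wins, with a default-deny fallback. The attribute names ("role",
# "anonymized", ...) are hypothetical examples.

def evaluate(policy, request):
    """Return the effect of the first rule whose attributes all match."""
    for rule in policy:
        if all(request.get(k) == v for k, v in rule["match"].items()):
            return rule["effect"]
    return "Deny"                                # default-deny

policy = [
    # Researchers may read anonymized records; everything else is denied.
    {"match": {"role": "researcher", "action": "read", "anonymized": True},
     "effect": "Permit"},
]

print(evaluate(policy, {"role": "researcher", "action": "read",
                        "anonymized": True}))    # Permit
print(evaluate(policy, {"role": "researcher", "action": "read",
                        "anonymized": False}))   # Deny
```

The default-deny fallback mirrors the conservative stance privacy frameworks take when no policy explicitly authorizes a request.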
In Adaptive Control of Redundant Task Execution for Dependable Volunteer Computing, Wang, Takizawa, and Kobayashi examined the security features that could enable Grid systems to exploit the massive computing power of volunteer computing systems. The authors proposed the Cell processor as a platform whose hardware security features could be used. To test its performance, a secure, parallelized K-Means clustering algorithm for the Cell was evaluated on a secure system simulator.
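For readers unfamiliar with the workload being secured, the K-Means kernel itself is small: points are repeatedly assigned to their nearest center, and each center moves to the mean of its cluster. The one-dimensional sketch below shows only that plain kernel, not the authors' secure, parallelized Cell implementation.

```python
# Plain K-Means in one dimension with two clusters, for illustration only.

def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

print(kmeans([1.0, 2.0, 3.0, 10.0, 11.0, 12.0], centers=[0.0, 5.0]))
# converges to centers [2.0, 11.0]
```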
The findings point to possible optimization for secure data mining in the Grid environments. To further provide security in Grid and cloud environments, Shreyas Cholia and Jefferson Porter discussed how to close the loopholes in the provisioning of resources and services in Publication and Protection of Sensitive Site Information in a Grid Infrastructure. The authors analyzed the various vectors of information being published from sites to Grid infrastructures, especially in the Open Science Grid, including resource selection, monitoring, accounting, troubleshooting, logging, and site verification data.
Best practices and recommendations were offered to protect sensitive data that could be published in Grid infrastructures.
Authentication mechanisms are common security features in cloud and Grid environments, where programs interoperate across domain boundaries. Public key infrastructures (PKIs) provide the means to securely grant access to systems in distributed environments, but as PKIs grow, systems become overtaxed in discovering available resources, especially when the certification authority is foreign to the prevailing environment.
Mobile Grid systems and their security are a major source of concern, owing to their distributed and open nature. Furthermore, Noordende, Olabarriaga, Koot, and Laat developed a trusted data storage infrastructure for Grid-based medical applications.
In Trusted Data Management for Grid-Based Medical Applications, while taking cognizance of privacy and security aspects, they redesigned the implementation of common Grid middleware components, which could affect the implementation of cloud applications as well. Section IV, Applications, covers applications increasingly deployed in Grid and cloud environments.
The architecture of Grid and cloud applications differs from conventional application models and thus requires a fundamental shift in implementation approaches. Cloud applications are even more distinctive, as they eliminate installation, maintenance, deployment, management, and support on the user's side; such cloud applications are considered Software-as-a-Service (SaaS) applications.
Grid applications are forerunners to clouds and are still common in scientific computing. Phylogenetic data analysis, for example, is known to be compute-intensive and suitable for high performance computing. In one such chapter, the authors improved upon the existing sequential and parallel AxParafit program, producing an efficient tool that facilitates large-scale data analysis; a free client tool is available for co-phylogenetic analysis.
In the chapter Persistence and Communication State Transfer in an Asynchronous Pipe Mechanism, by Philip Chan and David Abramson, the researchers described a distributed algorithm for handling dynamic resource availability in an asynchronous pipe mechanism that couples workflow components. Here, fault-tolerant communication is made possible by persistence through adaptive caching of pipe segments while providing direct data streaming.
Ashish Agarwal, in another chapter, Self-Configuration and Administration of Wireless Grids, described the peculiarities of wireless Grids, such as the limited power of mobile devices, limited bandwidth, standards and protocols, quality of service, and the increasingly dynamic nature of the interactions involved. To address these peculiarities, the researcher proposed a Grid topology and naming service that self-configures and self-administers various possible wireless Grid layouts.
Lai, Wu, and Lin described the effective utilization of P2P Grids for efficient job scheduling by examining a P2P communication model. The model aided job migration across heterogeneous systems and improved the usage of distributed computing resources. Gu, Zhang, and Pung, on the other hand, dwelt on facilitating efficient search for data in distributed systems using an ontology-based peer-to-peer network. Here, the researchers grouped data with the same semantics into a one-dimensional semantic ring space in the upper-tier network.
In the lower-tier network, peers in each semantic cluster were organized as a Chord identifier space. The authors demonstrated the effectiveness of the proposed scheme through simulation experiments.
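The Chord-style identifier space used in the lower tier can be sketched compactly: peers and data keys hash onto a ring of 2^m identifiers, and a key is stored at its successor, the first peer clockwise from the key's identifier. The sketch below simplifies routing to a sorted scan (real Chord uses finger tables for logarithmic lookup), and all names are illustrative.

```python
# Minimal Chord-style identifier space: keys map to their successor peer.

import hashlib

M = 2 ** 16                                    # size of the identifier space

def ring_id(name):
    """Hash a peer or data name onto the ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(peer_ids, key_id):
    """First peer id clockwise from key_id on the ring."""
    candidates = sorted(peer_ids)
    for pid in candidates:
        if pid >= key_id:
            return pid
    return candidates[0]                       # wrap around the ring

peers = [ring_id(p) for p in ("peerA", "peerB", "peerC")]
owner = successor(peers, ring_id("genome-dataset-42"))
print(owner in peers)                          # True: the key maps to a peer
```

Because both peers and keys share one hash space, peers joining or leaving only shift responsibility for the adjacent arc of the ring, which is what makes such overlays scale.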
In this final section, other chapters capture research trends in the realm of high performance computing. In computational Grid and cloud resource provisioning, memory usage may sometimes be overtaxed.
Although a RAM Grid can itself be constrained at times, it provides remote memory for user nodes that are short of memory. Rui Chu, Nong Xiao, and Xicheng Lu, in the chapter Push-based Prefetching in Remote Memory Sharing System, proposed push-based prefetching to enable memory providers to push potentially useful pages to user nodes.
With the help of sequential pattern mining techniques, memory pages that are likely to be useful can be located for prefetching.
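The prefetching idea can be sketched with a simple stand-in for the mining step: past page-access sequences are mined for which page tends to follow which, and on each request the provider pushes the most frequent successor. A first-order frequency table below is an assumption for illustration, not the chapter's actual mining technique.

```python
# Sketch of push-based prefetching driven by mined access patterns.

from collections import Counter, defaultdict

def mine(sequences):
    """Count, for each page, which pages followed it in past sequences."""
    follows = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            follows[a][b] += 1                # record "b followed a"
    return follows

def predict_next(follows, page):
    """Most frequent successor of `page` (the page to push), or None."""
    if page not in follows:
        return None
    return follows[page].most_common(1)[0][0]

history = [[1, 2, 3], [1, 2, 4], [1, 2, 3], [5, 1, 2]]   # made-up traces
model = mine(history)
print(predict_next(model, 2))                 # page 3 followed 2 most often
```

Pushing the predicted page before it is requested hides remote-memory latency whenever the prediction is right, at the cost of wasted bandwidth when it is wrong.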
The authors verified the effectiveness of the proposed method through trace-driven simulations. Another chapter presents a protocol that exploits channel diversity and a medium access control method to ensure quality-of-service requirements. IP telephony, meanwhile, has emerged as the most widely used peer-to-peer-based application.
Although success has been recorded in decentralized communication, providing a scalable peer-to-peer-based distributed directory for searching user entries still poses a major challenge. In conclusion, cloud technology is the latest iteration of information and communications technology driving global business competitiveness and economic growth. Although relegated to the background, research in Grid technology fuels and complements activities in cloud computing, especially in middleware technology.
In that vein, this book series is a contribution to the growth of cloud technology and the global economy, and indeed the information age.