American University of Sharjah


The International Conference on Computer Systems and Industrial Informatics (ICCSII)

Keynote Speakers

Next Generation Massive Data Centers for Cloud Computing Applications

 Prof. Mounir Hamdi

   IEEE Fellow, Head and Chair Professor, Department of Computer Science and Engineering

   Hong Kong University of Science and Technology

   Abstract: Data center infrastructure design has recently been receiving significant research interest from both academia and industry, in no small part due to the growing importance of data centers in supporting and sustaining rapidly growing web-based applications, including search (e.g., Google, Bing), video content hosting and distribution (e.g., YouTube, Netflix), social networking (e.g., Facebook, Twitter), and large-scale computations (e.g., data mining, bioinformatics, indexing).

Today's data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. For example, the Microsoft Live online services are supported by a Chicago-based data center, one of the largest data centers ever built, spanning more than 700,000 square feet, and Google operates more than one million servers.

As a result, the architecture of the network interconnecting the servers has a significant impact on the agility and reconfigurability of the data center infrastructure in responding to changing application demands and service requirements. Traditionally, data center networking has been built around top-of-rack (ToR) switches interconnected through end-of-rack (EoR) switches, which in turn are connected through core switches. This approach, besides being very costly, leads to significant bandwidth oversubscription towards the network core. This has prompted several researchers to propose alternative, scalable, cost-effective network infrastructures based on topologies such as Fat-Tree, DCell, BCube, MDCube, and Clos networks.
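To make the oversubscription problem concrete, the following sketch computes the oversubscription ratio at each tier of a traditional three-tier design. The port counts and link speeds are hypothetical numbers chosen for illustration, not figures from the talk:

```python
# Illustrative sketch (hypothetical numbers): bandwidth oversubscription
# in a traditional three-tier (ToR -> EoR -> core) data center network.
# The oversubscription ratio at a switch tier is the worst-case aggregate
# downstream demand divided by the uplink capacity toward the core.

def oversubscription_ratio(num_downlinks, downlink_gbps, num_uplinks, uplink_gbps):
    """Ratio of downstream demand to upstream capacity at one switch tier."""
    demand = num_downlinks * downlink_gbps
    capacity = num_uplinks * uplink_gbps
    return demand / capacity

# A ToR switch with 40 servers at 1 Gbps each and 4 x 10 Gbps uplinks:
tor = oversubscription_ratio(40, 1, 4, 10)    # 40 / 40 = 1.0 (non-blocking)

# An EoR/aggregation switch fronting 20 such racks (800 Gbps of demand)
# but provisioned with only 8 x 10 Gbps core uplinks:
eor = oversubscription_ratio(800, 1, 8, 10)   # 800 / 80 = 10.0

print(f"ToR oversubscription: {tor:.1f}:1")           # 1.0:1
print(f"Aggregation oversubscription: {eor:.1f}:1")   # 10.0:1
```

A 10:1 ratio toward the core means any-to-any traffic between racks can use only a tenth of the servers' nominal bandwidth, which is exactly the bottleneck that Fat-Tree and Clos-style topologies are designed to remove.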

In this talk, we first detail the trends and challenges in designing massive data centers, and highlight the research efforts being undertaken by the academic and industrial communities to address them. Second, we present some of our own solutions, which leverage key data traffic patterns and web applications to achieve scalable and cost-effective designs for massive data center infrastructures. Finally, we address some of the killer applications that are driving the need for data centers and cloud computing in general.

Energy Efficient Computing - from Smartphones to Servers

Dr. Sarma Vrudhula

 School of Computing, Informatics and Decision Systems Engineering

 Arizona State University, Tempe, AZ

 Abstract: Multi-core processors have become the de facto standard of computing systems in all market segments: smartphones, laptops, desktop PCs, and servers in data centers. This shift took place as a solution to the problem of soaring power consumption of single-core processors. As this trend continues, and processors with many hundreds of cores are planned, the industry is once again facing the problem of soaring power dissipation. In contrast to single-core processors, for multi-core processors with even a few tens of cores, it will not be technically feasible or economically justifiable to design a package that can dissipate the maximum possible heat. Packages can only target the average power consumption. Furthermore, large variations in software workload and manufacturing processes will result in greater spatial and temporal variations in power density than with earlier processors. As an increasing number of multi-core processors are incorporated into servers, thousands of servers are housed in data centers, and the number of data centers worldwide continues to grow rapidly, the energy consumption of data centers is becoming enormous. For these reasons, maximizing the energy efficiency of multi-core processors has become one of the most important and challenging problems facing the IT industry.

This talk will first outline the basic technical challenges of energy-efficient computing across different market segments. As both energy-aware design (static methods) and energy-efficient operation (dynamic control) are required, the talk will present an overview of energy-aware design methods at the device and circuit level, in physical layout, in architecture and compiler design, and in energy-conscious application software. It will then focus on system-level dynamic thermal management (DTM).

In general, DTM is a complex, multi-dimensional, constrained optimization problem, with objective functions, control variables, and constraints imposed both by the underlying technology and by the target market. The talk will present a framework for solving such optimization problems. It will highlight the challenges and solutions involved in computing the voltage and frequency schedule on-line (dynamic voltage and frequency scaling, or DVFS), dynamic thread or task migration, and active control of cooling. Both open-loop and closed-loop solutions will be described, along with the challenges and a solution for integrating the closed-loop solutions within the OS. The talk will conclude with experimental evidence of the value of run-time, optimal control of multi-core processors integrated within an operating system.
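The intuition behind why DVFS is such a powerful control knob can be sketched with the textbook CMOS dynamic-power model (this is a standard first-order model, not the speaker's framework, and the capacitance and voltage values below are hypothetical):

```python
# Illustrative sketch of why DVFS saves energy, using the first-order
# CMOS dynamic-power model: P = C * V^2 * f. Because supply voltage can
# be lowered roughly in step with frequency, power falls roughly
# cubically while a task's run time grows only linearly.

def dynamic_power(cap_farads, volts, freq_hz):
    """Dynamic switching power of a CMOS circuit: P = C * V^2 * f."""
    return cap_farads * volts**2 * freq_hz

def energy_for_task(cycles, cap_farads, volts, freq_hz):
    """Energy = power * time, where time = cycles / frequency."""
    return dynamic_power(cap_farads, volts, freq_hz) * (cycles / freq_hz)

# Hypothetical core: 1 nF effective switched capacitance, 10^9-cycle task.
C, CYCLES = 1e-9, 1e9
full = energy_for_task(CYCLES, C, 1.0, 2.0e9)   # 1.0 V @ 2 GHz -> 1.00 J
half = energy_for_task(CYCLES, C, 0.5, 1.0e9)   # 0.5 V @ 1 GHz -> 0.25 J

print(f"Energy at full speed: {full:.2f} J")
print(f"Energy at half speed: {half:.2f} J")
```

Halving both voltage and frequency quarters the energy of the task while only doubling its run time, which is the trade-off an on-line DVFS scheduler exploits when deadlines or thermal limits permit running slower.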