Saturday, February 28, 2009

Monitoring, Management and Service Frameworks

Written by Jon Greaves

Since the first computers entered server rooms, the need to monitor them has been well understood. The earliest forms of monitoring were as simple as status lights attached to each module showing whether it was powered up or in a failed state. Today’s datacenter is still awash with lights, with the inside joke being that many of these are simply “randomly pleasing patterns” and, in all honesty, provide very little use.

In 1988, RFC 1065 was released. The Request for Comments (RFC) process, typically run under the umbrella of organizations like the Internet Engineering Task Force (IETF), allows like-minded individuals to band together and build standards. RFC 1065 and its two sister RFCs outline a protocol, the Simple Network Management Protocol (SNMP), and a data structure, the Management Information Base (MIB). SNMP was originally focused on network devices, but its value was soon realized across all connected IT assets, including servers.

Today, SNMP has been through three major releases and is still a foundation for many monitoring solutions.
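
As a rough illustration of how a console might poll an SNMP agent, here is a minimal Python sketch. It assumes the Net-SNMP command-line tools are installed, that the device speaks SNMPv2c with the community string "public", and the host address is only a placeholder; it simply reads the standard MIB-II sysUpTime object.

# Minimal sketch of an SNMP poll using the Net-SNMP "snmpget" CLI.
# Assumptions: Net-SNMP tools installed, SNMPv2c, community "public",
# and a reachable agent -- all placeholders for illustration only.
import subprocess

SYS_UPTIME_OID = "1.3.6.1.2.1.1.3.0"   # MIB-II sysUpTime

def poll_uptime(host, community="public"):
    """Ask the device for its uptime; raises CalledProcessError on failure."""
    result = subprocess.run(
        ["snmpget", "-v", "2c", "-c", community, host, SYS_UPTIME_OID],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(poll_uptime("192.0.2.10"))   # example address from RFC 5737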

At the highest-level, three forms of monitoring exist today:

  1. Reactive – a device (server, storage, network, etc.) sends a message to a console when something bad happens

  2. Proactive – the console asks the device if it is healthy

  3. Predictive – based on a number of values, the health of a device is inferred

Each of the above has pros and cons. For example, reactive monitoring tends to offer the most specific diagnostics (e.g., “my fan is failing”). One scenario, however, limits its usefulness as your only solution: should the device die or fall off the network, it will not generate messages. Since the console is purely reacting to messages, it cannot determine whether the device is alive and well or completely dead. This is a major flaw in reactive monitoring solutions.

Proactive monitoring, on the other hand, has the console polling the device at predetermined intervals. During each poll the console asks the device a number of questions to gauge its health and function. This solves the issue with reactive monitoring, but creates significantly more network traffic and load on the device. In fact, cases have occurred where devices have been polled so hard that they could not operate.

What typically happens, then, is that reactive monitoring is paired with proactive polling: you get the benefits of both approaches and negate the disadvantages.
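
To picture the pairing, here is a simplified Python sketch; the trap source and the reachability check are hypothetical stand-ins for a real trap listener and a real ICMP or SNMP health check. A console thread reacts to incoming messages while a background poller periodically checks each device, so a silent, dead device is still noticed.

# Sketch: reactive trap handling paired with a proactive liveness poll.
# The device names, the trap queue, and is_reachable() are placeholders.
import queue
import threading
import time

traps = queue.Queue()                      # reactive: devices push messages here
devices = {"db01": True, "web01": True}    # device name -> last known "alive" state

def is_reachable(name):
    # Placeholder: a real check would be an ICMP ping or an SNMP GET.
    return True

def proactive_poller(interval=60):
    """Poll every device on a fixed interval so silent failures are still caught."""
    while True:
        for name in devices:
            devices[name] = is_reachable(name)
            if not devices[name]:
                print(f"ALERT: {name} did not answer the poll")
        time.sleep(interval)

def reactive_console():
    """Handle device-initiated messages (e.g., 'my fan is failing')."""
    while True:
        device, message = traps.get()      # blocks until a trap arrives
        print(f"TRAP from {device}: {message}")

threading.Thread(target=proactive_poller, daemon=True).start()
reactive_console()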

While reactive and proactive monitoring may be the norm today, they still leave computer systems vulnerable to outages. As complexity continues to grow, a different approach to monitoring is needed. Two very interesting fields of research - prognostics and autonomics - are emerging to take on these challenges.

Prognostics makes use of telemetry to look for early signs of failure, often by applying complex mathematical models. These models take into account many streams of data and look not only at directly correlated failure conditions, but also at what might best be described as the harmonics of a system. For example, by looking at the frequency of alarms and health data from multiple components of a system, small variations can be detected which can lead to failures.
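
As a toy illustration of the idea (and only that, using a single telemetry stream and a simple rolling z-score rather than the sophisticated models described above), small deviations can be flagged well before they become hard failures:

# Toy prognostic check: flag readings that drift more than 3 standard
# deviations from a rolling baseline. Real prognostic engines correlate
# many streams; this sketch watches just one, with invented fan-speed data.
from collections import deque
from statistics import mean, stdev

def detect_drift(stream, window=30, threshold=3.0):
    baseline = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value          # early-warning signal
        baseline.append(value)

# Example: fan-speed telemetry with a slow upward drift at the end.
readings = [2400 + (i % 5) for i in range(100)] + [2600, 2650, 2700]
for index, value in detect_drift(readings):
    print(f"sample {index}: {value} rpm deviates from baseline")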


This prognostic approach has been used with great success in other industries. The commercial nuclear industry has deployed it to help detect issues and false alarms; false alarms can result in the shutdown of a facility and cost millions of dollars per day. We also see many military applications for this kind of advanced monitoring, including next-generation battlefield systems and the Joint Strike Fighter, where thousands of telemetry streams are analyzed in real time to look for issues that could impact a mission.

While these applications may seem far removed from the problems of monitoring today’s computer systems, several companies have made huge advances in this technology. Most notable is Sun Microsystems, which has used such approaches in several high-end servers not only to detect pending hardware failures, but has also applied them to software to look for “software aging”, where memory leaks, runaway threads and general software bloat can lead to outages of long-running applications. Pair detection of aging with “software rejuvenation”, where applications are periodically cleansed, and large improvements in application availability can be realized.

Autonomics and autonomic computing can also be applied to these challenges, allowing IT infrastructure to take corrective action to prevent outages and optimize application performance. Autonomic computing is an initiative started in early 2001 by IBM with the goal of helping manage complex distributed systems. It tends to manifest itself in tools implemented as decision trees, mimicking the actions a system administrator might perform to correct issues before they become outages. Academia is leading the charge in this area, with key projects in supercomputing centers where scale and complexity require a new approach to the problem.
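
A heavily simplified sketch of that decision-tree style follows; the thresholds, checks and remediation actions are invented for illustration, but the shape is the point: encode the steps an administrator would walk through, and let the system execute them.

# Sketch of an autonomic remediation rule: observe a metric, walk a small
# decision tree, and take the corrective action an admin would have taken.
# All thresholds, names and actions here are illustrative placeholders.

def check_memory(vm):
    return vm["mem_used_pct"]

def remediate(vm):
    usage = check_memory(vm)
    if usage < 80:
        return "ok"
    if usage < 95:
        return f"restart leaking worker process on {vm['name']}"   # mimic admin action
    return f"fail over {vm['name']} to standby and page on-call"

print(remediate({"name": "crm-app-01", "mem_used_pct": 91}))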

With the advances in systems monitoring and management also come new kinds of risks - some arising from seemingly harmless data. Let’s take the example of a publicly traded company that outsources the hosting and management of its infrastructure. The application management company enables monitoring, and the customer is careful to exclude any sensitive data from what is being monitored, allowing only basic data to be collected on memory, disk, network and CPU. At first impression, this seems like harmless data.

Each quarter, as the company closes its books, its CRM and ERP systems (both monitored) crunch the quarter’s data. In Q1 the customer has a great quarter, as publicly disclosed in its filings. The provider monitoring the environment now has a benchmark from which transactional volume can be inferred based on disk I/O, memory and CPU utilization. But let’s say the customer misses its numbers in Q2: now the provider has data from which a bad quarter can be inferred. As Q3 closes, and before the CFO has even seen the results, the hosting provider - armed with just basic performance data from CPU, memory and disk - can, in theory, predict the quarter’s results.

This simplistic scenario highlights the future value of telemetry, even telemetry that seems low risk today. As our ability to infer failures, performance and, eventually, business results grows, new kinds of risks will emerge that require mitigation.

To this point we have focused on what basically is “node level” monitoring, i.e., the performance of a server or other piece of IT infrastructure and its health alone. This is, and will likely always be, the foundation for managing IT systems. However, it does not tell the full story - arguably the most important factor in today’s environments - of how the business processes supported by the infrastructure are performing.

IT Service Management focuses on the customer’s experience of a set of IT systems as defined by their business functions. For example, assume a customer has a CRM system deployed. While the servers may be reporting a healthy status, if the application has been misconfigured or a batch process is hung, the end user will be experiencing degraded operations while a traditional monitoring solution is likely to be reporting the system as functioning and “green”. Taking an IT Service Management approach, the CRM solution would be modeled to show its service dependencies (e.g., it depends on web, application and database tiers and requires network, servers and storage to be functioning). This model is then enhanced with simulated end-user transactions, application performance metrics and statistics from an IT service desk to identify issues beyond the availability of the core IT infrastructure. This holistic approach to monitoring provides greater visibility to CIOs, typically expressed as a dashboard showing how their IT investment is performing from their user community’s point of view.
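
To make the dependency model concrete, here is a minimal sketch with hypothetical component names and states. It rolls component health and a synthetic end-user check up into a single “CRM service” status of the kind a CIO dashboard would display:

# Sketch: roll up component health into a business-service status.
# Component names and states are invented; a real system would pull these
# from monitoring feeds and synthetic end-user transactions.
DEPENDENCIES = {
    "CRM service": ["web tier", "app tier", "database tier", "network", "storage"],
}

component_health = {
    "web tier": "green",
    "app tier": "green",
    "database tier": "green",
    "network": "green",
    "storage": "green",
    "synthetic login transaction": "red",   # end-user check failing despite green infrastructure
}

def service_status(service):
    states = [component_health[c] for c in DEPENDENCIES[service]]
    states.append(component_health["synthetic login transaction"])
    return "red" if "red" in states else ("amber" if "amber" in states else "green")

print("CRM service:", service_status("CRM service"))   # -> red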


Virtualization technology and its use to enable cloud computing have opened up many opportunities for organizations to realize the agility we all seek from our IT investments. Virtualization has not, however, simplified the administration of IT as originally promised – instead, it has greatly increased its complexity. Case in point: consider a typical use of virtualization, server consolidation. Pre-consolidation, each server had a function, typically supported by a single operating system image running on bare metal. Should the server or operating system experience a problem, it was easy to uniquely identify the issue and initiate an incident-handling process to remediate it. In a consolidated environment, a single server may be running tens of virtual machines, each with its own unique function. These virtual machines may also be migrated between physical servers in an environment. Traditional monitoring solutions were not designed around the concept that a resource may move dynamically, or may even be offline while it is not needed and started on demand.
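
A small sketch of what monitoring has to accommodate, using made-up inventory data standing in for a real virtualization management API: checks are keyed on the virtual machine, and the VM’s current host and power state are resolved at poll time rather than assumed.

# Sketch: monitor VMs rather than fixed servers, resolving the current
# physical host (and whether the VM is even running) at each poll.
# The inventory dict is a placeholder for a real virtualization management API.
inventory = {
    "vm-payroll": {"host": "esx-03", "state": "running"},
    "vm-batch":   {"host": None,     "state": "powered_off"},   # started on demand
}

def poll(vm_name):
    vm = inventory[vm_name]
    if vm["state"] != "running":
        return f"{vm_name}: skipped (powered off by design, not an outage)"
    return f"{vm_name}: polling via current host {vm['host']}"

for name in inventory:
    print(poll(name))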

Now, taking the extreme of virtualization to the next logical level - cloud computing - today’s monitoring tools are taxed even more. Your servers are now hosted by an infrastructure- or platform-as-a-service provider, and you have even less control over your resources. This has not gone unnoticed by providers. In fact, over the past month, several monitoring consoles have been released (including for Amazon EC2) to start addressing this challenge. Independent solutions are also appearing, most notably Hyperic, which launched http://www.cloudstatus.com/, where you can view Amazon and Google App Engine availability using proactive monitoring. The natural evolution will be for these tools to interface with more traditional solutions to give companies more holistic views of their environments, taking the old concept of a “Manager of Managers” to the next level.

Today’s computing architectures are really taxing the foundations of monitoring solutions. This does, however, create great opportunities for tools vendors and solution providers. It also brings into focus the idea of IT Service Management, where understanding end users’ performance and expectations, and mapping them back to SLAs, becomes the norm.

A brief interview with Javier Soltero, co-founder and CEO of Hyperic, the leader in multi-platform, open source IT management.


Questions and Answers


Q. Monitoring is typically seen as the last step of any deployment, often not considered during the development. Do you see customers embracing a tighter coupling of the entire software lifecycle with engineering IT Service Management Solutions?

Absolutely, it’s a very encouraging trend, especially among SaaS companies and other businesses that are heavily dependent on their application performance. The really successful ones spend time building a vision for how they want to manage the service. That vision then helps them select which technologies they use and how they use them. Companies that build instrumentation into their apps have an easier time managing their application performance and will resolve issues faster.

Q. Customers are really embracing IT Service Monitoring as a key element in understanding not only performance but also the ROI of IT investments. What challenges do you see for customers adopting these technologies?

The biggest challenge we see is the customer’s ability to extract the right insight from the vast amount of data available. The usability of these products also tends to make the task of figuring out things like ROI and other business metrics difficult. Oftentimes a tool that can successfully collect and manage the massive amounts of data required to dig deep into performance metrics lacks an analytics engine capable of displaying the data in an insightful way, and vice versa.

Q. End user monitoring has typically been delivered with synthetic transactions, and this has certainly been a valuable tool. How do you see this technology evolving?

The technology for external monitoring of this type will continue to evolve as the clients involved for these applications get more and more sophisticated. For example, a user might interact with a single application that includes components from many other external applications and services. The ability for these tools to properly simulate all types of end-user interactions is one of the many challenges. More important is the connection of the external transaction metrics to the internal ones.


Q. Monitoring is one part of the equation; mapping availability and performance makes this data useful. With virtualization playing such a big part in datacenters today, how do you see tools adapting to meet the challenges of portable and dynamic workloads?

The most important element of monitoring in these types of environments is visibility into all layers of the infrastructure and the ability to correlate information. Driving efficiency in dynamic workload scenarios like on-premise virtualization or infrastructure services like Amazon EC2 requires information about the performance and state of the various layers of the application. Providing that level of visibility has been a big design objective of Hyperic HQ from the beginning and it’s helped our customers do very cool things with their infrastructure.

Q. How do you see monitoring and IT service management evolve as cloud computing becomes more pervasive?

Cloud computing changes the monitoring and service management world in two significant ways. First, the end user of cloud environments is primarily a developer who is now directly responsible for building, deploying, and managing his or her application. This might change over time, but I’m pretty sure that regardless of the outcome, Web and IT operations roles will be changed dramatically by this platform. Second, this new “owner” of the cloud application is trapped between two SLAs: an SLA he provides to his end user and an SLA that is provided by the cloud to him. Cloudstatus.com is designed to help people address this problem.

Q. Do you see the SaaS model reemerging for the delivery of monitoring tools, where customers will use hosted monitoring solutions?

Yes, but it will be significantly different from the types of SaaS based management solutions that were built in the past. The architecture of the cloud is the primary enabler for a monitoring solution that, like the platform that powers it, is consumed as a service.

Friday, February 27, 2009

Making the Best Use of Your Security Budget in Lean Times: Four Approaches

Written by Elizabeth Ireland

Many predict 2009 will produce the tightest economic conditions in decades. The subprime meltdown, tight credit markets and recession conditions will mean most CIOs will feel the downward spiral of the economy right where it hurts -- in their IT budgets.

Unfortunately, this also coincides with the most serious threat environment security professionals have faced. Hackers’ tactics are becoming more targeted. The increase in the number and business importance of web applications is generating additional enterprise risk. Budgets may get tight, but your responsibility remains the same: minimize risk.

It’s a tall order in the face of possible spending cutbacks, but tight budgets mean you have to focus on how best to reduce risk – and that definitely doesn’t mean paying less attention to security. In fact, at times like these, that may be the biggest mistake. The highest levels of an organization are asking their CIOs, “How do we know we’re secure?” The only way to know is by understanding the risks, understanding the ROI, and knowing how security fits into not only your other IT priorities but also the company’s bottom line. Defending the security budget is always a challenge, but here are four approaches that can help.

1. Metrics make the most compelling argument. Ask yourself this question: Is your security risk going up or down over time, and what is impacting it? This is baseline data that every organization needs and should be on track to monitor. If you cannot answer this clearly, realign your projects and priorities to make sure you can get this information on an ongoing basis. Every CIO should know at least three things: how vulnerable my systems are, how safely configured my systems are, and whether we are prioritizing the security of the highest-value assets to the business. Though security metrics are in the early days of development and adoption, the industry is maturing and solid measurements are available. These areas can be assessed and assigned an objective numeric score (an illustrative scoring sketch follows the fourth approach below), allowing you to set your company’s own risk tolerance and use that to make critical decisions about where to allocate funds. As you face increased budget scrutiny, the metrics allow you to identify – and defend as necessary – where your security priorities are, and how security and risk fit into overall ROI.

2. Compare your baseline to others in your industry. The guarded nature of security data means CIOs trying to access this type of information will have to get creative. A good place to start is the Center for Internet Security -- their consensus baseline configurations can be used as a jumping off point to identify areas of risk. Vertical industry benchmarks will be an evolving area, and another source may be what you can learn from your personal relationships. Seek out others within your industry and find out what metrics they are using and what they are spending as a percentage of their IT budget. Risk tolerance is specific to each organization, but there are similarities within industries that could prove to be helpful.

3. Learn from other areas in your company. Many process-oriented disciplines can serve as a proxy for the type of evolution facing security; network operations is a good example. In the early days of network operations, the only scrutiny came if things weren’t working correctly. Over the years, it has matured to a level of operational metrics for uptime and performance, and is embedded in quarterly and annual performance goals. These metrics allow a continuous cycle of performance, measurement and improvement. In addition, network operations can provide an important lesson in single-solution economies of scale. Find solutions that work across your entire enterprise – this is the only way to get economies of scale in implementation and ensure you get the critical enterprise-wide risk information that can deliver the metrics you need.

4. Take steps to automate your compliance process. Are you compliant, and can you routinely deliver the reports that auditors request? The economic benefits that come from doing this correctly are significant. Audit costs are directly related to how complicated it is to audit and prove the integrity of a business process, so finding a way to save the auditors’ time is one of the single biggest opportunities to drive down costs. Even though your audit costs may be hitting the finance area’s budget, meet with your company’s finance team to understand what audits are costing you and how the right kind of automation could lessen them; there will certainly be time and resource savings for the security team as well. There isn’t an exact recipe for compliance automation, so talk to your auditors, look at your environment, and begin the discovery of how much time is spent preparing for and reacting to audits. If you’re a company that allows your divisions to individually automate, it’s time to think about taking those principles enterprise-wide.
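As a purely illustrative sketch of the numeric scoring mentioned in the first approach (the weights, scales and asset values below are invented, not an industry-standard formula), a risk score might combine vulnerability and configuration findings weighted by the business value of each asset:

# Illustrative risk score: combine vulnerability and configuration scores,
# weighted by business value of the asset. All numbers are placeholders.
assets = [
    {"name": "payment gateway", "vuln_score": 7.5, "config_score": 6.0, "business_value": 1.0},
    {"name": "intranet wiki",   "vuln_score": 8.0, "config_score": 4.0, "business_value": 0.3},
]

def risk(asset, vuln_weight=0.6, config_weight=0.4):
    return asset["business_value"] * (
        vuln_weight * asset["vuln_score"] + config_weight * asset["config_score"]
    )

for a in sorted(assets, key=risk, reverse=True):
    print(f"{a['name']}: risk {risk(a):.1f}")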

Regardless of budget conditions, you will still be faced with decisions on which projects have the biggest impact on the business. The threat environment requires that you make the absolute best decisions with your available budget by investing in the right places and getting better use of your resources. Lastly, remember that times of difficulty are often the times of opportunity. Lessons learned now in the face of tighter budgets can spark valuable models of efficiency and progress for the future.

Thursday, February 26, 2009

Native PCI Express I/O Virtualization in the Data Center

Written by Marek Piekarski


I/O virtualization based on PCI Express® – the standard I/O interconnect in servers today – is an emerging technology with the capability to address the key issues that limit the growth of the data center: power and manageability.

Data Centers and Commodity Servers
Commodity servers today trace their ancestry – and unfortunately their architecture – back to the humble personal computer (PC) of the early 1980s. A quarter of a century later, the ubiquity of the PC has changed the shape of enterprise computing – volume servers today are effectively PCs, albeit with far more powerful CPUs, memory and I/O devices. We now have the acquisition cost and scalability advantages that have come with the high volumes of the PC market, but the business demands on enterprise servers remain much the same as they were in terms of reliability, storage capacity and bandwidth, networking and connectivity – demands that a PC was never intended to address.

Over the last decade the demands for increased performance have been answered by simply providing more and more hardware, but this trend is proving to be no longer sustainable. In particular, power and management have become the dominant costs of the data center. More hardware is no longer the solution needed for growth.

Server architecture – what is I/O?

I/O can be defined as all the components and capabilities which provide the CPU – and ultimately the business application – with data from the outside world, and allow it to communicate with other computers, storage and clients.

The I/O in a typical server consists of Ethernet network adaptors (NICs), which allow it to communicate with clients and other computers; networked storage adaptors (HBAs), which provide connectivity into shared storage pools; and local disk storage (DAS) for non-volatile storage of local data, operating systems (OSs) and server “state”. I/O also includes all the cables and networking infrastructure required to interconnect the many servers in a typical data center. Each server has its own private set of I/O components. I/O today can account for as much as half the cost of the server hardware.


Fig 1: Server I/O

I/O Virtualization

Data centers in recent years have been turning to a variety of “virtualization” technologies to ensure that their capital assets are used efficiently. Virtualization is the concept of separating a “function” from the underlying physical hardware. This allows the physical hardware to be pooled and shared across multiple applications, increasing its utilization and its capital efficiency, while maintaining the standard execution model for applications.

Virtualization consists of three distinct steps:

  1. Separation of resources – providing management independence.

  2. Consolidation into pools – increasing utilization and saving cost, power and space.

  3. Virtualization – emulating the original functions as “virtual” functions to minimize software disruption.

I/O Virtualization (IOV) follows the same concept. Instead of providing each server with dedicated adaptors, cables, network ports and disks, IOV separates the physical I/O from the servers, leaving them as highly compact and space efficient pure compute resources such as 1U servers or server blades.

Fig 2: CPU-I/O Separation

The physical I/O from multiple servers can now be consolidated into an “IOV Appliance”.
Because the I/O components are now shared across many servers, they can be better utilized, and the number of components is significantly reduced when compared to a non-virtualized system. The system becomes more cost, space and power efficient, more reliable, and easier to manage.

Fig 3: I/O Consolidation

The final step is to create “virtual” I/O devices in the servers which look to the server software exactly the same as the original physical I/O devices. This functional transparency preserves the end-users’ huge investment in software: applications, OSs, drivers and management tools.


Fig 4: I/O Virtualization

I/O Virtualization Approaches for Commodity Servers

I/O Virtualization is not new. Like many technologies new to the PC and volume server, it has been in mainframes and high-end servers for many years. Its values are well understood. The challenge has been to bring those values to the high-volume, low-cost commodity server market at an appropriate price point, while not requiring major disruption to end users’ software, processes and infrastructure.

A number of companies have, over recent years, introduced products delivering I/O virtualization based on Infiniband. Although they have delivered many of the advantages of IOV – particularly in data centers which already use Infiniband – their use of Infiniband has limited their attractiveness to the broader market. The cost, complexity, and disruption of introducing new Infiniband software, networks and processes have negated the value of IOV.

The default I/O interconnect in volume servers is PCI Express. The PCI-SIG has recently defined a number of extensions to PCI Express to support I/O virtualization capabilities both within a single server (SingleRoot-IOV) and across multiple servers (MultiRoot-IOV). However, these extensions are not fully transparent with respect to standard PCI Express and require new modified I/O devices and drivers. The requirement for an “ecosystem of components” means that it is likely to be some years before we see MR-IOV, in particular, as a standard capability in a significant range of I/O devices.

Another approach is to virtualize standard PCI Express I/O devices and drivers available in volume today by adding the virtualization capability into the PCI Express fabric rather than into the devices. This has the advantage of exploiting the existing standard hardware and software and being extremely transparent and non-disruptive. Because the virtualization capability is contained in the PCI Express fabric, neither the I/O device nor any of the servers’ software, firmware or hardware needs to change. VirtenSys calls this new approach “Native PCIe Virtualization”.


Fig 5: Comparison of Infiniband IOV, PCI MR-IOV and Native PCIe IOV

Key Features and Benefits of Native PCIe IOV

Hardware cost reduction through consolidation
IOV reduces hardware cost by improving on the poor utilization of I/O in most servers today. Native PCIe Virtualization contributes to this cost saving by reusing the existing high-volume, low-cost PCIe components and by adding very little in the way of new components.

Power reduction
Increasing I/O utilization through consolidation not only minimizes acquisition cost but also reduces the amount of I/O hardware required, and hence the power dissipation of the data center.
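
A rough back-of-the-envelope sketch of the consolidation arithmetic follows; every figure below is a hypothetical placeholder rather than a vendor measurement, but it shows how sharing raises utilization and shrinks the adapter count and power draw.

# Back-of-the-envelope consolidation arithmetic; every figure is a
# placeholder, not a measured value.
servers = 40
nics_per_server, hbas_per_server = 2, 2
adapter_watts = 10
average_utilization = 0.15          # dedicated adapters typically sit mostly idle

dedicated_adapters = servers * (nics_per_server + hbas_per_server)
# Size the shared pool for aggregate load plus headroom, at ~80% utilization.
shared_adapters = int(dedicated_adapters * average_utilization / 0.8) + 1

print("dedicated adapters:", dedicated_adapters)                              # 160
print("shared adapters:", shared_adapters)                                    # 31
print("power saved (W):", (dedicated_adapters - shared_adapters) * adapter_watts)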

Management simplification
I/O virtualization changes server configuration from a hands-on, lights-on manual operation involving the installation of adaptors, cables and switches to a software operation suitable for remote or automated management. By removing humans from the data center and providing automated validation of configuration changes, data center availability is enhanced. It is estimated that 40 percent of data center outages are due to “human error”.

Dynamic configuration – agility
Businesses today need to adapt quickly to change if they wish to prosper. Their IT infrastructure also needs to be agile to support rapidly changing workloads and new applications. I/O virtualization allows servers to be dynamically configured to meet the processing, storage and I/O requirements of new applications in seconds rather than days.

Ease of deployment and non-disruptive integration

Native PCIe IOV technology has been designed specifically to avoid any disruption of existing software, hardware or operational models in data centers. Native PCIe IOV works with – and is invisible to – existing volume servers, I/O adaptors, management tools, OSs and drivers, making its deployment in the data center extremely straightforward.

Rapid and cost-effective adoption of new CPU and I/O technologies
CPU and I/O technologies have been evolving at different rates. New, more powerful and cost/power-effective CPUs typically appear every nine months, while new I/O technology generations come only every three to five years. In particular, the “performance-per-watt” of new CPUs is significantly higher than that of a few years ago. The separation of I/O from the compute resources in servers (CPU and memory) allows new power-efficient CPUs to be introduced quickly without disrupting the I/O subsystems. Similarly, new I/O technologies can be introduced as soon as they are available. Since these new high-cost, high-performance I/O adaptors are shared across multiple servers, their introduction cost can be smoothed significantly compared with today’s deployment model.

Summary

I/O virtualization is an innovation that allows I/O to be separated, consolidated and virtualized away from the physical confines of a server enclosure.

Of the various approaches described, Infiniband-based IOV is most suitable for installations which already have an Infiniband infrastructure and whose servers already use Infiniband software. For the majority of data centers without Infiniband, IOV based on the standard I/O interconnect, PCI Express, provides a much more acceptable, low-power, low-cost solution. In particular, Native PCIe Virtualization provides today all the benefits of IOV without requiring new I/O devices, drivers, server hardware or software.

VirtenSys I/O Virtualization Switches improve I/O utilization to greater than 80 percent, enhance throughput, and reduce I/O cost and power consumption by more than 60 percent. The products also enhance and simplify data center management by dynamically allocating, sharing, and migrating I/O resources among servers without physical re-configuration or human intervention, dramatically reducing Operational Expense (OpEx).

Wednesday, February 25, 2009

Eight Mobile Technologies to Watch in 2009 and 2010

Gartner has identified eight mobile technologies that will evolve significantly through 2010, impacting short-term mobile strategies and policies.

"All mobile strategies embed assumptions about technology evolution so it?s important to identify the technologies that will evolve quickly in the life span of each strategy," said Nick Jones, vice president and distinguished analyst at Gartner. "The eight mobile technologies that we have pinpointed as ones to watch in 2009 and 2010 will have broad effects and, as such, are likely to pose issues to be addressed by short-term strategies and policies."

Gartner's eight mobile technologies to watch in 2009 and 2010

Bluetooth 3.0 - The Bluetooth 3.0 specification will be released in 2009 (at which point its feature set will be frozen), with devices starting to arrive around 2010. Bluetooth 3.0 will likely include features such as ultra-low-power mode that will enable new devices, such as peripherals and sensors, and new applications, such as health monitoring. Bluetooth originated as a set of protocols operating over a single wireless bearer technology. Bluetooth 3.0 is intended to support three bearers: "classic" Bluetooth, Wi-Fi and ultrawideband (UWB). It's possible that more bearers will be supported in the future. Wi-Fi is likely to be a more important supplementary bearer than UWB in the short term, because of its broad availability. Wi-Fi will allow high-end phones to rapidly transfer large volumes of data.

Mobile User Interfaces (UIs) - UIs have a major effect on device usability and supportability. They will also be an area of intense competition in 2009 and 2010, with manufacturers using UIs to differentiate their handsets and platforms. New and more-diverse UIs will complicate the development and support of business-to-employee (B2E) and business-to-consumer (B2C) applications. Organizations should expect more user demands for support of specific device models driven by interface preferences. Companies should also expect consumer interfaces to drive new expectations of application behavior and performance. Better interfaces will make the mobile Web more accessible on small devices, and will be a better channel to customers and employees.

Location Sensing - Location awareness makes mobile applications more powerful and useful; in the future, location will be a key component of contextual applications. Location sensing will also enhance systems, such as mobile presence and mobile social networking. The growing maturity of on-campus location sensing using Wi-Fi opens up a range of new applications exploiting the location of equipment or people. Organizations delivering business or consumer applications should explore the potential of location sensing; however, exploiting it may create new privacy and security challenges.

802.11n - 802.11n boosts Wi-Fi data rates to between 100 Mbps and 300 Mbps, and the multiple-input, multiple-output technology used by 802.11n offers the potential for better coverage in some situations. 802.11n is likely to be a long-lived standard that will define Wi-Fi performance for several years. High-speed Wi-Fi is desirable to stream media around the home and office. From an organizational perspective, 802.11n is disruptive; it's complex to configure, and is a "rip and replace" technology that requires new access points, new client wireless interfaces, new backbone networks and a new power over Ethernet standard. However, 802.11n is the first Wi-Fi technology to offer performance on a par with the 100 Mbps Ethernet commonly used for wired connections to office PCs. It is, therefore, an enabler for the all-wireless office, and should be considered by companies equipping new offices or replacing older 802.11a/b/g systems in 2009 and 2010.

Display Technologies - Displays constrain many characteristics of both mobile devices and applications. During 2009 and 2010, several new display technologies will impact the marketplace, including active pixel displays, passive displays and pico projectors. Pico projectors enable new mobile use cases (for example, instant presentations projected on a desktop to display information in a brief, face-to-face sales meeting). Battery life improvements are welcome for any user. Good off-axis viewing enables images and information to be shared more easily. Passive displays in devices, such as e-book readers, offer new ways to distribute and consume documents. Display technology will also become an important differentiator and a user selection criterion.

Mobile Web and Widgets - The mobile Web is emerging as a low-cost way to deliver simple mobile applications to a range of devices. It has some limitations that will not be addressed by 2010 (for example, there will be no universal standards for browser access to handset services, such as the camera or GPS). However, the mobile Web offers a compelling total cost of ownership (TCO) advantage over thick-client applications. Widgets (small mobile Web applets) are supported by many mobile browsers, and provide a way to stream simple feeds to handsets and small screens. Mobile Web applications will be a part of most B2C mobile strategies. Thin-client applications are also emerging as a practical solution to on-campus enterprise applications using Wi-Fi or cellular connections.

Cellular Broadband - Wireless broadband exploded in 2008, driven by the availability of technologies such as high-speed downlink packet access and high-speed uplink packet access, combined with attractive pricing from cellular operators. The performance of high-speed packet access (HSPA) provides a megabit or two of bandwidth in uplink and downlink directions, and often more. In many regions, HSPA provides adequate connectivity to replace Wi-Fi "hot spots," and the availability of mature chipsets enables organizations to purchase laptops with built-in cellular modules that provide superior performance to add-on cards or dongles.

Near Field Communication (NFC) - NFC provides a simple and secure way for handsets to communicate over distances of a centimeter or two. NFC is emerging as a leading standard for applications such as mobile payment, with successful trials conducted in several countries. It also has wider applications, such as "touch to exchange information" (for example, to transfer an image from a handset to a digital photo frame, or for a handset to pick up a virtual discount voucher). Gartner does not expect much of the NFC payment or other activities to become common, even by 2010, in mature markets, such as Western Europe and the U.S. NFC is likely to become important sooner in emerging markets, with some deployments starting by 2010. Additional information is available in the Gartner report "Eight Mobile Technologies to Watch in 2009 and 2010." The report is available on Gartner's Web site at http://www.gartner.com/

Tuesday, February 24, 2009

How to Ask for a Resume Critique

Is your resume as good as it could be? Are you happy with the number of calls you’re receiving for job interviews? Is your resume email-ready, optimized for keywords and strategically written to market your best credentials? If you answered no to any of these questions, you would benefit from a third-party resume critique.

Kathy Sweeney, a certified resume writer and president of the nonprofit National Resume Writers’ Association (NRWA), says job seekers can benefit from getting a second opinion on their resumes. “A critique can provide insight into whether the job seeker is using the proper wording for his or her industry and if the document will make a great first impression,” she says.

Whom to Ask

What qualifications should your resume reviewer possess? "I firmly believe that credentials are important,” Sweeney says. She recommends looking for such resume-industry designations as:

  • Nationally Certified Resume Writer (NCRW), awarded by the NRWA.
  • Certified Professional Resume Writer (CPRW), offered by the Professional Association of Resume Writers.
  • Master Resume Writer (MRW), offered by Career Masters Institute.

Sweeney adds that the professional conducting the critique should have reviewed resumes and interviewed candidates themselves. “Unless a professional has experience in the hiring arena or at the very least has networked with hiring managers to learn what they like to see on resumes, it would be hard to provide a valuable critique,” she says.

What to Expect

Sweeney says the reviewer should look at all aspects of the resume, just as a hiring manager would — reviewing it for initial impression, content and how well it stands out from other resumes.

You can sign up for a fee-based or free critique. Here are the differences:

Fee-Based Critiques: These are normally conducted by resume writers and other career-industry professionals. For a fee, your reviewer provides detailed feedback on your resume’s strengths and weaknesses in a written report, telephone consultation or combination. If you sign up for a paid review, find out exactly what you will receive, and request a sample report so you can see the quality of the feedback. Ask if the reviewer will complete a follow-up review after you make the suggested changes to ensure the document is job search-ready. Expect to pay between $25 for a basic critique and $200 or more for a detailed, comprehensive review.

Free Critiques: These can also be helpful but probably won’t be as detailed as a paid review. “A free critique is usually very general,” Sweeney says. “It may provide a synopsis of the reviewer’s overall opinion [of] the resume and potential problem areas. Reviewers may offer strategies on what they would do differently with the resume.” Good resources for free critiques include the Monster Resume Tips message board, hiring managers in your industry and professional resume-writing firms.

Information You Should Provide

Your reviewer needs to know your career goal and industry target to supply useful feedback. “I usually gather information about the job seeker’s target position, and ask how his or her background relates to the position,” Sweeney says. “I also ask job seekers to provide me with a few position postings related to their target job.”

Tell your reviewer about potential problem areas, such as employment gaps, job-hopping or unrelated work history. The more your reviewer knows about your background, the more constructive the feedback can be.

Use What Works, Disregard the Rest

If you ask 10 people to review your resume, you will likely receive 10 different opinions. You may also receive conflicting advice, making it difficult to know what changes you should implement. After you receive a resume critique, be open to suggestions and ready to make revisions that work for you. Pay attention to advice from resume writers and hiring managers, especially those within your target industry. By listening to professionals who know what makes a resume successful, you will be on your way to a successful job search.

Saturday, February 21, 2009

Show Your Skills on Your IT Resume

Employers often screen candidates based on their technical skills, so job seekers naturally want to make sure they present their skills properly. As a result, creating a resume’s skills section can be a challenge.
Typical resume issues techies wrestle with include:
- Whether to list skills alphabetically or in order of importance.
- Whether to include every skill - but how much detail is too much?
- How to differentiate between expert knowledge and passing familiarity.


Don’t Exaggerate

One recruiter’s advice is simple: Don’t obsess over the skills section to the point of embellishment. "In adding a skills section to their resume, a lot of people have a tendency to exaggerate their level of expertise in various technologies," says Scott Hajer, senior corporate recruiter for Software Architects. "They figure the more keywords, the more exposure."

Such tactics are likely to backfire, especially during a technical interview. "We had a candidate who had a big grid on his resume, listing all the skills he had and rating himself on a scale of 1 to 5 in them," says Hajer. One of the skills was J2EE, with a "3" (for average ability) tagged to it. "When asked to talk about J2EE, he could not even define the term, much less talk about his experience in it," he says.

Some employers provide questionnaires asking candidates to rate themselves on particular skills, but they don’t expect such ratings in a resume’s skills section. Keep things simple. Denote each skill with the number of years’ experience or, if you’re intent on including a rating, with words like novice, intermediate and expert.

Skills and Their Uses

The skills section should be buttressed with job descriptions detailing how those skills have been used in the workplace. For example, a resume listing Java, Oracle and UML in the skills section should describe how those technologies were employed on a particular project. Those details provide employers with genuine insight into the depth of a person’s knowledge and experience with those technologies.

Stay Relevant

Consider these tips:

  • Delete outdated skills or those with no relevance to your job objective.
  • Separate tech skills into familiar categories such as operating systems, networks and programming tools.
  • List skills in the order of their relevance to your job objective, rather than alphabetically.
  • If you’ve only read about it in Computerworld or on News.com, don’t include it.

Resume Organization

Techies may want to place the skills section after the job objective and before the experience section. But there are exceptions. If you’re just starting out, you may want to place a greater emphasis on education and internships. If you’re seeking management or sales positions, you may want to avoid crowding the resume with a list of technical skills. Instead, consider placing the list below the experience section or adding other elements, such as communication abilities and foreign languages, to the skills section.

Here are examples of one job seeker’s technical skills section:

Paragraph Format — the Most Common

Technical Skills
Languages: Java, XML, C, C++, JavaScript, SQL, HTML, UML.
Tools: Borland JBuilder, Sun ONE Studio (Forte), Macromedia Dreamweaver MX, Rational Rose, UltraEdit-32, Borland CBuilder, Oracle SQL Plus.
Operating Systems: Windows (XP, 2000, NT), IBM OS/2 2.0, HP-UX 9.0, DEC VMS 4.1, Unix (Linux and Sun Solaris).

List Format — Gives Employers a Quick Overview

Technical Skills
Languages     Tools                        Operating Systems
Java          Borland JBuilder             Windows (XP, 2000, NT)
XML           Sun ONE Studio (Forte)       IBM OS/2 2.0
C             Macromedia Dreamweaver MX    HP-UX 9.0
C++           Rational Rose                DEC VMS 4.1
JavaScript    UltraEdit-32                 Unix (Linux and Sun Solaris)
SQL           Borland CBuilder
HTML          Oracle SQL Plus
UML

List Format with Years of Experience — Shows Depth

Technical Skills
Web Technologies    Dreamweaver, JavaScript, HTML    4-7 years
Languages           Java, C, C++, UML                5-8 years

List Format with Years of Experience and Skill Level — More Detail

An alternative is to denote only the years of experience.

Technical Skills
Languages     Years’ Experience    Skill Level
Java          6                    Expert
XML           3                    Intermediate
C             6                    Expert
C++           4                    Intermediate
JavaScript    6                    Expert
SQL           4                    Intermediate
HTML          6                    Intermediate
UML           2                    Novice

Friday, February 20, 2009

The 10 Worst Job Tips Ever

Liz Ryan / Business Week

Nearly every day, someone sends me a bit of astounding job-search advice from a blog or a newsletter. Some of this advice seems to come directly from the planet X-19, and some of it seems to have been made up on the spot. Here are 10 of my favorite pieces of atrocious job-search advice, for you to read and ignore at all costs:


1. Don’t Wrap It Up

The Summary or Objective at the top of your résumé is the wrap-up; it tells the reader, “This person knows who s/he is, what s/he’s done, and why it matters.” Your Summary shows off your writing skills, shows that you know what’s salient in your background, and puts a point on the arrow of your résumé. Don’t skip it, no matter who tells you it’s not necessary or important.

2. Tell Us Everything

Another piece of horrendous job search advice tells job-seekers to share as much information as possible. A post-millennium résumé uses up two pages, maximum, when it’s printed. (Academic CVs are another story.) Editing is a business skill, after all—just tell us what’s most noteworthy in your long list of impressive feats.

3. Use Corporatespeak

Any résumé that trumpets “cross-functional facilitation of multi-level teams” is headed straight for the shredder. The worst job-search advice tells us to write our résumés using ponderous corporate boilerplate that sinks a smart person’s résumé like a stone. Please ignore that advice, and write your résumé the way you speak.


4. Don’t Ever Postpone a Phone Screen

A very bad bit of job-search advice says, “Whatever you do, don’t ever miss a phone screen! Even if you’re in the shower or on your way to be the best man at your brother’s wedding, make time for that phone interview!” This is good advice if your job-search philosophy emphasizes groveling. I don’t recommend this approach. Let the would-be phone-screener know that you’re tied up at the moment but would be happy to speak at 7 p.m. on Thursday, or some other convenient time. Lock in the time during that first call, but don’t contort your life to fit the screener’s schedule.

5. Don’t Bring Up Money

Do bring up money by the second interview, and let the employers know what your salary requirements are before they start getting ideas that perhaps you’re a trust-fund baby and could bring your formidable skills over to XYZ Corp. for a cool $45,000. Set them straight, at the first opportunity.

6. Send Your Resume Via an Online Job Ad or the Company Web Site Only

Successful job-seekers use friends, LinkedIn contacts, and anybody else in their network to locate and reach out to contacts inside a target employer. Playing by the rules often gets your résumé pitched into the abyss at the far end of the e-mail address talent@xyzcorp.com. If you’ve got a way into the decision-maker’s office, use it. Ignore advice that instructs you to send one résumé via the company Web site and wait (and wait, and wait) to hear from them.

7. Never Send a Paper Resume

I’ve been recommending sending snail-mail letters to corporate job-search target contacts for three or four years now, and people tell me it’s working. The response rate is higher, and the approach is friendlier. A surface-mail letter can often get you an interview in a case where an e-mail would get ignored or spam-filtered. One friend of mine sent her surface-mail résumé and cover letter to a major company’s COO in New York, and got a call a week later from a general manager wanting to interview her in Phoenix, where she lives. She showed up at the interview to see her paper letter—yes, her actual, signed letter, on bond paper—and résumé sitting on his desk in Phoenix (probably conveyed via an old-fashioned Inter-Office envelope). An e-mail might have ended up in the COO’s spam folder.

8. Wait For Them to Call You

You can’t wait for companies to call you back. You’ve got to call and follow up on the résumés you’ve sent. If an ad says “no calls,” use your LinkedIn connections to put you in touch with someone who can put in a word with the hiring manager. Don’t sit and wait for the call to come. Your résumé is in a stack with 150 others, and if you don’t push it up the pipeline, no one will.

9. Give Them Everything

Give them your résumé, your cover letter, and your time in a phone-screen or face-to-face interview. But don’t give anyone your list of references until it’s clear that mutual interest to move forward exists (usually after two interviews), and don’t fill out endless tests and questionnaires in the hope of perhaps getting an audience with the Emperor. Let the employers know that you’d be happy to talk (ideally on the phone at first) to see whether your interests and theirs intersect. If there’s a good match, you’ll feel better about sharing more time and energy on whatever tests and exercises they’ve constructed to weed out unsuitable candidates. Maybe.

10. Post Your Resume on Every Job Board

This is the best way in the world to get overexposed and undervalued in the job market. (Exception: If you’re looking for contract or journeyman IT work, it’s a great idea to post your credentials all over.) People will find your LinkedIn profile if they’re looking and if you’ve taken the time to fill it out with pithy details of your background. If you’re not employed, include a headline like “Online Marketer ISO Next Challenge” or “Controller Seeking Company Seeking Controller.” Your résumé posted on a job board is a spam-and-scam magnet and a mark that your network isn’t as robust as it might be. These aren’t the signs you want to put out there. Use your network (vs. the world at large) to help you spread the job-search word.

Thursday, February 19, 2009

Avoid the Top 10 Resume Mistakes

It’s deceptively easy to make mistakes on your resume and exceptionally difficult to repair the damage once an employer gets it. So prevention is critical, especially if you’ve never written one before. Here are the most common pitfalls and how you can avoid them.

1. Typos and Grammatical Errors

Your resume needs to be grammatically perfect. If it isn’t, employers will read between the lines and draw not-so-flattering conclusions about you, like: “This person can’t write,” or “This person obviously doesn’t care.”

2. Lack of Specifics

Employers need to understand what you’ve done and accomplished. For example:

  • Worked with employees in a restaurant setting.
  • Recruited, hired, trained and supervised more than 20 employees in a restaurant with $2 million in annual sales.

Both of these phrases could describe the same person, but clearly the second one’s details and specifics will more likely grab an employer’s attention.

3. Attempting One Size Fits All

Whenever you try to develop a one-size-fits-all resume to send to all employers, you almost always end up with something employers will toss in the recycle bin. Employers want you to write a resume specifically for them. They expect you to clearly show how and why you fit the position in a specific organization.


4. Highlighting Duties Instead of Accomplishments

It’s easy to slip into a mode where you simply start listing job duties on your resume. For example:

  • Attended group meetings and recorded minutes.
  • Worked with children in a day-care setting.
  • Updated departmental files.

Employers, however, don’t care so much about what you’ve done as what you’ve accomplished in your various activities. They’re looking for statements more like these:

  • Used laptop computer to record weekly meeting minutes and compiled them in a Microsoft Word-based file for future organizational reference.
  • Developed three daily activities for preschool-age children and prepared them for a 10-minute holiday program performance.
  • Reorganized 10 years’ worth of unwieldy files, making them easily accessible to department members.

5. Going on Too Long or Cutting Things Too Short

Despite what you may read or hear, there are no real rules governing the length of your resume. Why? Because human beings, who have different preferences and expectations where resumes are concerned, will be reading it. That doesn’t mean you should start sending out five-page resumes, of course. Generally speaking, you usually need to limit yourself to a maximum of two pages. But don’t feel you have to use two pages if one will do. Conversely, don’t cut the meat out of your resume simply to make it conform to an arbitrary one-page standard.

6. A Bad Objective

Employers do read your resume’s objective statement, but too often they plow through vague pufferies like, “Seeking a challenging position that offers professional growth.” Give employers something specific and, more importantly, something that focuses on their needs as well as your own. Example: “A challenging entry-level marketing position that allows me to contribute my skills and experience in fund-raising for nonprofits.”

7. No Action Verbs

Avoid using phrases like “responsible for.” Instead, use action verbs: “Resolved user questions as part of an IT help desk serving 4,000 students and staff.”

8. Leaving Off Important Information

You may be tempted, for example, to eliminate mention of the jobs you’ve taken to earn extra money for school. Typically, however, the soft skills you’ve gained from these experiences (e.g., work ethic, time management) are more important to employers than you might think.

9. Visually Too Busy

If your resume is wall-to-wall text featuring five different fonts, it will most likely give the employer a headache. So show your resume to several other people before sending it out. Do they find it visually attractive? If what you have is hard on the eyes, revise.

10. Incorrect Contact Information

I once worked with a student whose resume seemed incredibly strong, but he wasn’t getting any bites from employers. So one day, I jokingly asked him if the phone number he’d listed on his resume was correct. It wasn’t. Once he changed it, he started getting the calls he’d been expecting. Moral of the story: Double-check even the most minute, taken-for-granted details — sooner rather than later.

Wednesday, February 18, 2009

Wireless Certifications

Wireless certifications are currently being offered by seven organisations:

Planet3 Wireless offer the Certified Wireless Network Professional (CWNP) program.

SANS offer the Global Information Assurance Certification (GIAC) Assessing Wireless Networks, or GAWN.

OSSTMM offer the OSSTMM Wireless Security Expert (OWSE) certification.

ThinkSECURE offer the Organisational Systems Wireless Auditor (OSWA) and Open Source Wireless Integration Security Professional (OSWISP) certifications.

Cisco offer the Cisco Wireless LAN Specialists program.

AreTec offer the AreTec Wireless Career Certifications (AceWP) program.

NARTE offer the Wireless System Installers Certification program.



Certification homepage: http://www.cwnp.com/

Planet3 Wireless currently offer 5 vendor-neutral wireless certifications:
Wireless# is the entry-level wireless certification for the IT industry.

Certified Wireless Network Administrator (CWNA) is a foundation level wireless LAN certification for the CWNP Program.

Certified Wireless Security Professional (CWSP) is designed to give you the knowledge you need to keep hackers out of your wireless network.

Certified Wireless Analysis Professional (CWAP) is designed to give you the knowledge to troubleshoot and increase the performance of your wireless network.

Certified Wireless Network Expert (CWNE) is designed to give you the skills to administer, install, configure, troubleshoot and design wireless network systems, including packet analysis, intrusion detection, performance analysis and advanced design.



Certification homepage: http://www.giac.org/certifications/security/gawn.php

SANS offer GIAC certifications for all of the major IT Security competencies, and the GAWN is their Wireless Security offering. The course to accompany the certification is the GIAC Assessing Wireless Networks (SEC-617), which can be studied for with the SANS mentor program or covered in one of their global SANS training events.

Official SANS GIAC Certification overview

"The GAWN certification is designed for technologists who need to assess the security of wireless networks. The certification focuses on the different security mechanisms for wireless networks, the tools and techniques used to evaluate and exploit weaknesses, and techniques used to analyse wireless networks." -SANS




Certification homepage: http://www.isecom.org/projects/owse.shtml

"The OWSE certification program is designed for those who want to learn more about the various ways to technically execute a comprehensive and professional wireless security audit within the internationally recognized Open-Source Security Testing Methodology Manual (OSSTMM) framework." -OSSTMM

The OWSE certification exam is a practical examination requiring a total of 100 responses within the 4-hour examination period.




Certification homepage: http://securitystartshere.net/page-training-oswisp.htm

ThinkSECURE currently offer the following vendor-neutral WLAN certifications:
Organisational Systems Wireless Auditor (OSWA)

Open Source Wireless Integration Security Professional (OSWISP)
"Accredited by the international security institute ISECOM (Institute of Security & Open Methodologies), the OSWiSP™ teaches a vendor-independent approach to practical deployment, auditing and securing of both private and public Wireless Local Area Networks (WLANs) based on the PRACTICAL WIRELESS DEPLOYMENT METHODOLOGY (PWDM), a peer-reviewed, open-source methodology." -ThinkSECURE



Certification homepage: http://www.cisco.com/

Cisco currently offer 3 wireless certifications:
Cisco Wireless LAN Design Specialist: Associated exam: Wireless LAN for System Engineers (642-577 WLANSE)

Cisco Wireless LAN Sales Specialist, designed for Cisco device resellers. Associated exam: Wireless LAN for Account Managers (646-102 WLANAM).

Cisco Wireless LAN Support Specialist: Associated exam: Wireless LAN for Field Engineers exam (642-582 WLANFE).




Certification homepage: http://www.aretechnologies.net/

AreTec currently offer 4 vendor-neutral wireless certifications. All of the AreTec certifications cover applications development, wireless telecom carriers and networks, wireless networking and security, and wireless embedded systems, each at a progressively deeper level. In order, the certifications are:
AreTec Certified Wireless Engineer (ACWE)

AreTec Certified Wireless Developer (ACWD)

AreTec Certified Wireless Architect (ACWA)

Certified AceWP Instructor (CAI): designed for ACE WP program trainers.




Certification homepage: http://www.narte.org/

NARTE offer 2 wireless installer certifications, both designed to certify those who install wireless LAN systems, Bluetooth, U-NII devices, AVIS, and unlicensed PCS systems:
Wireless Installer Engineer

Wireless Installer Technician

Tuesday, February 17, 2009

Standby Generator Maintenance Tips

Written by Rakesh Dogra

Today most data centers handle mission-critical operations and processes, so it is not feasible to shut them down even for a short time. Power needs to be available continuously, and the demand for it is increasing by the day. Of course, the availability of external grid power depends on where the data center is located.
For example, in most of the developed world the chances of a power outage are lower than in many locations in the developing world. Yet however reliable the external source may be, it is beyond the direct control of data center management.


Backup Plan & Standby Generators

It is always advisable to have a backup power plan for any data center. Normally this arrangement takes the form of battery backup and UPS systems, but these are only a short-term solution, lasting a few minutes at most. Diesel generators are the most common and useful machines for generating power for hours at a time until the main supply is back on track.

Since these standby generators are so important, and since they are not running continuously, you need to ensure that they start whenever required (which is typically an automatic process when the grid power fails). Hence certain generator maintenance tips need to be followed by the personnel in charge of generator maintenance.


Generator Maintenance Tips

Lubricating oil is the lifeblood of the generator, or any engine for that matter, and the oil level must be kept up to the mark. There is normally a dipstick arrangement for checking the lubricating oil level. The level is normally checked by taking out the dipstick, wiping it clean and inserting it again before reading, rather than just pulling it out and reading it directly. This is similar to checking the oil in your vehicle.

Apart from the level, care should be taken to ensure that the oil is changed at the intervals set by the manufacturer. Visual inspection will give clues as to whether the oil is too dirty and needs to be changed even if the running hours or time limit haven’t expired; this judgment comes naturally with experience. When replacing or replenishing the oil, use only oil of the recommended grade, as that grade has been chosen with various parameters in mind; putting in just “any oil” is poor practice and could result in serious damage to the engine. Lubricating oil testing kits are available commercially that can tell you whether the oil is fit for use based on certain parameters, and performing these tests does not require specialist knowledge.

Similarly, other routine checks and maintenance should be carried out on components that require regular replacement, such as the oil filters, air filters and so forth. Usually the manufacturer’s manual will give you the interval for these tasks in terms of running hours or a time frame.

Just remember that even if you do not need your generator every other day, it is good practice to start and run it for some time. This could be every day if possible, or according to the schedule of the staff. Certain faults might only be noticeable when the generator is actually running, so regular starting ensures that any fault present is detected at a time when there is no real need for the generator.

When the generator is running, check and note down a few important readings (where instrumentation for such readings is provided) and observations:

• Exhaust temperatures
• Jacket cooling water temperature
• Lubricating oil temperature
• Abnormal sounds or vibration
• Any smoke or oil leakage

These readings should fall within the ranges specified by the company service engineer, the manual and so on. If recorded routinely, this information will give a clear indication of any abnormality that might not otherwise be visible.

It should be the duty of a particular person or group of persons (on a rotating basis) to check the above parameters and keep a log of them. This can be done in an official record book in a standardized format, so that even as the persons taking the readings change, the process continues unhindered. A good record helps your repair technician diagnose problems before or after a fault develops.
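
For teams that prefer an electronic log to a paper record book, a minimal sketch along these lines could standardize the entries; the field names and file path are illustrative assumptions rather than part of any particular monitoring product.

    # generator_log.py - append one standardized test-run record per line (a sketch,
    # not a vendor tool; adapt the fields to whatever your service engineer specifies).
    import csv
    from datetime import datetime

    LOG_FILE = "generator_log.csv"   # hypothetical path
    FIELDS = ["timestamp", "exhaust_temp_c", "jacket_water_temp_c",
              "lube_oil_temp_c", "abnormal_sound_or_vibration", "smoke_or_oil_leak"]

    def record_reading(exhaust, jacket_water, lube_oil, sound_note="none", leak_note="none"):
        """Append one row so every operator logs the same parameters in the same order."""
        with open(LOG_FILE, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if f.tell() == 0:                 # write the header only for a new file
                writer.writeheader()
            writer.writerow({
                "timestamp": datetime.now().isoformat(timespec="seconds"),
                "exhaust_temp_c": exhaust,
                "jacket_water_temp_c": jacket_water,
                "lube_oil_temp_c": lube_oil,
                "abnormal_sound_or_vibration": sound_note,
                "smoke_or_oil_leak": leak_note,
            })

    # Example entry from a weekly test run (illustrative figures only)
    record_reading(exhaust=412, jacket_water=82, lube_oil=95)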

Some major maintenance items cannot be carried out by data center staff and can only be handled by a specialist, such as a complete de-carbonization of the generator.

The part we have talked about till now is actually the engine part of the generator, while the real generation takes place in the alternator, which converts the rotary power of the engine into electric power. The alternator requires very little maintenance, but care should be taken to ensure that during operation it does not make any abnormal noise or sparks.


Conclusion

The tips mentioned above fall into the category of what might be called “layman tips” and help to ensure that the generator remains healthy. If any contrary symptoms or signs are found, specialist help should be summoned, so that the data center retains the capability to face any sudden loss of grid power. Preventative maintenance can go a long way toward ensuring your availability.

Monday, February 16, 2009

IT Metrics That Matter to IT and Data Center Professionals

Written by Tsvetanka Stoyanova

IT metrics is a vast topic; many methodologies are used in theory and fewer of them in practice. IT metrics lie at the crossroads of business and technology. They measure different aspects of a company’s activity, and different companies use different sets of metrics.

The need to measure the performance of a data center is obvious. If you can't measure something, you can't manage it. IT metrics provide feedback about the performance of a data center and based on this feedback managerial decisions are made.


What Is the Reason Behind IT Metrics?

The idea of measuring performance is not new to technical people. However, unlike metrics in computing, which, though not necessarily precise, capture easily quantifiable values (downtime measures the time a network is down; bandwidth measures the capacity of a channel), the IT metrics used in business are a bit elusive and subjective. Actually, if there is something more elusive than an IT standard, it must be IT metrics.

Needless to say, when the conclusions reached by applying IT metrics are untrue, they are misleading and the metrics become less useful. What is more, if the result is wildly untrue (or fundamentally wrong), it can lead to the wrong decision, and in that case IT metrics are not only useless but harmful. Just imagine that you decide to measure uptime based on the number of computers you have – i.e. if you had 100 computers, this would count as 100% uptime. But unfortunately you have only 90 computers, so you can never reach 99.99%, not to mention 100% uptime.

While this example might be pretty lame and extreme, it is possible for a manager to apply IT metrics in that way. IT metrics when misused can be disastrous.
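
For contrast, the conventional way to compute availability ties the metric to measured downtime rather than to an unrelated quantity such as machine count. A minimal sketch follows; the figures are illustrative, not drawn from any particular data center.

    # Availability as conventionally measured: the fraction of a period the service was up.
    def availability_percent(total_minutes: float, downtime_minutes: float) -> float:
        return 100.0 * (total_minutes - downtime_minutes) / total_minutes

    minutes_in_month = 30 * 24 * 60                                # 43,200 minutes in a 30-day month
    print(round(availability_percent(minutes_in_month, 45), 3))    # 99.896 - just short of "three nines"

    # Counting computers, by contrast, says nothing about downtime at all,
    # which is exactly why that "metric" produces nonsense conclusions.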


Sample IT Metrics for Use in Data Centers

When used properly, IT metrics can be of great use. The hard part is how to use them to your advantage.

The first point that needs to be settled is what a company will measure. Only then comes the question of how to do it and which set of IT metrics to choose. According to Forrester analysts, “The key to success is choosing a small number of metrics that are relevant to the business and have the most impact on business outcomes. The five metrics that meet the criteria for relevance and impact are investment alignment to business strategy, business value of IT investments, IT budget balance, service level excellence, and operational excellence. These five metrics should form the core of an IT performance scorecard.”

The inappropriate choice of an IT metrics methodology is the first common problem. You choose a very clever-looking methodology that seems like the answer to your prayers, but it turns out that even if the methodology is not full of technical errors (e.g. measuring uptime based on the number of computers), it doesn’t measure what you want to measure, simply because it was not designed for it.

Another common problem at the stage of choosing an IT metrics methodology is that managers “get greedy” – i.e. they want the most comprehensive and complete IT metrics system, one that will measure everything and everybody. While such a system might be possible in theory, in practice this approach fails. If you look at the quotation above, Forrester analysts stress that a small number of relevant metrics is what works best. And this is the case no matter what you try to measure.

It is possible to use many sets of IT metrics simultaneously and to measure many aspects of your data center’s activity. For instance, you can measure performance in general, or how green your data center is. One of the best IT metrics sets for measuring performance in general is the Key Performance Indicators (KPIs) methodology. If you want to learn more about it, the A Hierarchy of Metrics article will give you a basic idea upon which you can expand further.
Green IT metrics are also popular, especially for data centers. There are also sets of IT metrics for measuring mainframe performance. Metrics of financial performance are traditional. Actually, there are hundreds of sets and methodologies to measure everything – one piece at a time. Certainly, there is no lack of IT metrics for those who are eager to use them.


IT metrics come in all shapes and sizes. Many organizations, research institutes and even individual companies and consultants develop their own sets of IT metrics. It is impossible to say which of the many sets is best, because even a perfect methodology may not meet your exact needs.

It is not possible to recommend a single IT metrics set that works always and for everybody. The catch is that a good IT metrics set should be universal, yet tailored to the needs of the particular company. Only if these conditions are met will you receive results that can be trusted and used as a basis for decisions.

Saturday, February 14, 2009

IT administrators go ‘rogue’: minimizing the threat from inside

Written by Marc Hudavert

Tough times for the economy often mean that businesses need to look at reducing costs. Typically, a company’s largest overhead will be its staff, but IT managers may want to think twice before shrinking headcount in their department. A recent survey by Cyber-Ark highlighted that 88 per cent of IT administrators would steal passwords and valuable data from the network if they unexpectedly lost their jobs.

This statistic, as concerning as it seems, doesn’t even touch upon the problem of those left behind, simmering in discontent at the sudden increase in workload for no extra pay. What power is being left in the hands of people who could potentially use their knowledge and expertise to wreak havoc on your network?

The city of San Francisco recently experienced the effect of this power at first hand when disgruntled system administrator Terry Childs held the city’s network to ransom by harvesting lists of colleagues’ usernames and passwords; attaching devices to the network that would enable illegal remote access; and creating a super password that gave him exclusive access rights to the IT system which he refused to surrender to police.

With much of the local government email traffic, payroll systems and police department communication conducted over this network, Childs was well aware of the level of control he could wield over his superiors with this kind of information at his fingertips. Not only was it likely to cost the city – and the taxpayer – millions of dollars to repair the vulnerabilities in the network, but his bosses’ embarrassment also deepened with every minute this sensitive data was exposed.

So what can companies do to protect themselves from a potential Terry Childs situation? The key is to remember some basic principles that should underpin good working practices at any point in time, and to ensure that the appropriate technology is in place to help maintain the necessary equilibrium between access and control.

Segregation of duty: One of the key recommendations of Sarbanes-Oxley legislation, and a sensible principle for a company of any size or status, is the concept of segregation of duty. Ensuring that no single individual has control over two or more phases of a transaction or operation is a simple method to safeguard against workers undertaking processes from start to finish without being subjected to an internal audit procedure.

Unfortunately, however, the strength of this rule begins to wear down as departmental headcount reduces; fewer bodies are available for the checks to pass through and more responsibilities are loaded onto individual people. This is when IT managers need to be able to deal with administrative tasks as well as managerial responsibilities.

Rather than adopting a hands-off management approach, they need to educate themselves as to the minutiae of the tasks and responsibilities of administrators so in the event of absence, sickness or redundancy, the manager isn’t left in the lurch and has the knowledge and understanding to step into the role when required.

Role-based access: In addition to segregation of duty, it’s important to work to the principle of least privilege. Each individual should only be awarded a level of network access that is essential for them to do their job. These access rights and privileges can be most effectively managed through a centralised system which grants staff access to both buildings and systems, facilitated by the use of smart card technology.
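
As a rough sketch of the least-privilege principle in code, the following uses a central role-to-permission lookup; the role names and permissions are hypothetical and not drawn from any specific identity-management product.

    # Hypothetical role-to-permission mapping enforcing least privilege:
    # each role is granted only the rights essential to the job.
    ROLE_PERMISSIONS = {
        "helpdesk":     {"reset_user_password", "view_tickets"},
        "net_admin":    {"configure_switches", "view_network_logs"},
        "backup_admin": {"run_backups", "restore_files"},
    }

    def is_allowed(role: str, action: str) -> bool:
        """Permit an action only if it is explicitly listed for the user's role."""
        return action in ROLE_PERMISSIONS.get(role, set())

    # A compromised help-desk account still cannot reconfigure the network.
    print(is_allowed("helpdesk", "configure_switches"))   # False
    print(is_allowed("net_admin", "configure_switches"))  # True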

Smart card technology works on the principle of two-factor authentication, combining a form factor (something you have) with something you know (a password or PIN). This means that even if an employee leaves the company without surrendering the physical card, building and system access rights can be instantly revoked, rendering the password – and thus the smart card – invalid.

Password management: The use of one-time passwords (OTPs) can help protect the validity of passwords in the authentication process. Ensuring critical passwords are automated to change after each use (as opposed to static passwords) significantly diminishes the risk of rogue administrators harvesting individual log-ins for unauthorised remote access, or using the data to block all users from the network.

By removing the constant need to update, change and respond to forgotten password queries, the use of OTPs also reduces the administrative burden on the IT department. Any solution that minimises the stress and workload of the overstretched IT administrator definitely has to be welcomed.
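
As a rough illustration of how such one-time passwords are often generated, here is a minimal sketch of the HMAC-based approach standardized in RFC 4226 (HOTP) and RFC 6238 (TOTP); the shared secret shown is a placeholder, and a real deployment would rely on the token vendor’s hardware and authentication server rather than ad hoc code.

    # One-time password sketch (HOTP per RFC 4226; TOTP per RFC 6238) using only the
    # Python standard library. Illustrative only.
    import hashlib
    import hmac
    import struct
    import time

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
        # The "counter" is simply the current 30-second time window.
        return hotp(secret, int(time.time()) // period, digits)

    shared_secret = b"example-shared-secret"             # placeholder value only
    print("Current OTP:", totp(shared_secret))           # changes every 30 seconds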

Hardware: Conduct regular audits of all devices supplied to staff during a period of employment, ensuring no unauthorised equipment is attached to the network or removed from the building without permission. Siphoning data from the system to be stored elsewhere is often one of the first signs that an administrator is planning to operate below the radar.

Taming the rogue: Of course, it is not possible to safeguard completely against the wrath of the IT administrator scorned. A clever individual with highly tuned technical abilities and a resentful nature will always find a way around the system. However, with the right operational policies and effective management technologies in place, there’s no reason why an equally clever IT manager can’t make it that bit more difficult for the rogues to try.

Friday, February 13, 2009

Can Cloud Computing Get a Boost from the Poor Economy?

Written by Tsvetanka Stoyanova

The poor economy is a topic one can’t escape: many of the headlines in any major media outlet are dominated by recession-related topics, and most of the reports are gloomy. The recession has hit all branches of the industry, and for some of them the damage is devastating.
Against such a background, it sounds strange to think that there might be sectors which will prosper thanks to the poor state of the economy, but the case of cloud computing looks exactly like that.


Cloud Computing – the Bright Side of Life?

Cloud computing might not reach rocket heights purely because of the poor state of the economy, but the forecasts for its development are still pretty bright. Cloud computing has been on the rise for some time and its rise is expected to continue, though the reasons are very different from, for example, those behind the exponential growth of real estate a couple of years ago. Certainly cloud computing is not hype; its success has a solid economic base.

It is not accurate to say that cloud computing is recession-proof, but the expectations are high even if the recession gets uglier. According to IDC experts: “Over the next five years, IDC expects spending on IT cloud services to grow almost threefold, reaching $42 billion by 2012 and accounting for some 9% of total software sales. More importantly, spending on cloud computing will accelerate throughout the forecast period, capturing 25% of IT spending growth in 2012 and nearly a third of growth the following year.” If you don’t call this forecast bright, I don’t know what would sound better in these tough economic times.

In addition to industry analysts, people from the data center sector are also reporting that cloud computing is on the rise. The growth they report is pretty steep, and chances are that the positive developments will persist in the future.

Why Cloud Computing Will Succeed

If you are familiar with what cloud computing is (according to the definition of Webopedia, cloud computing is “A type of computing, comparable to grid computing that relies on sharing computing resources rather than having local servers or personal devices to handle applications.”), then it is hardly a surprise that it can benefit from the poor economy.

There is only one reason why cloud computing will benefit from the poor economy: savings. Compared with a dedicated in-house deployment, cloud computing is much cheaper, can be more reliable (unless you choose the most amateurish provider), and gives you the chance to see whether the software you are using is what you need without paying for a full license. Cloud computing allows you to start new projects without substantial expense (you don’t have to buy the equipment; you rent it, but you are still using it), and this is a really tangible benefit, especially at a time when IT budgets are shrinking.

On the other hand, when the economy was healthier, that also boosted cloud computing, but the reason was not lack of money – rather it was ease of use and perhaps even curiosity. Companies wanted to try new applications, and cloud computing was the easier way to try an application before buying it and deploying it in-house. Even companies that have been using cloud computing only as a temporary option will most likely stay with their provider, because now is not the moment to invest in new in-house solutions.

Web 2.0 is going full throttle, which provides another reason why companies will need cloud computing services. With all the traffic and storage requirements of a Web 2.0 application, it is a safe bet that many companies that want to host Web 2.0 applications for internal or external use will need a place to do it. Hosting Web 2.0 applications in-house is not that easy, which is why many companies will opt to use the services of cloud providers, who are pros at dealing with the intricacies of deploying and administering Web 2.0 applications.

Is Cloud Computing a Safe Bet?

All of the above sounds wonderful and the forecasts for the growth of cloud computing are positive, but if you plan to expand your capacity in order to accommodate more cloud computing clients, think twice before doing it. Develop an in-depth analysis before jumping on the “cloud”, because you may be making an investment that does not meet your requirements.
