tag:blogger.com,1999:blog-62737872870304076812024-03-14T09:19:53.998+05:00Free LibraryResource to free stuff like articles, books, etc. on the netUnknownnoreply@blogger.comBlogger91125tag:blogger.com,1999:blog-6273787287030407681.post-50059552644504876702011-09-30T11:58:00.001+05:002011-09-30T11:58:42.300+05:00Is Cloud Right For You?<p> <br /><b><strong><em>Is Cloud Right for You? Focusing on Fundamentals and Shedding the Hype…</em></strong></b></p> <p>There continues to be huge hype in the press and from analyst houses regarding the “power of the Cloud” and how the Cloud will “solve a plethora of IT challenges” faced by data center professionals. <br />These silver-bullet statements are interesting because the challenges facing data center professionals have not changed over the past decade: Availability (outages, redundancy, human error), Efficiency (centralized IT access, minimal energy requirements, TCO), Capacity (inventory control, planning, modeling) and Compliance (control, log and physical access, energy standards). <br /> <br /><strong>The Real Question … Will Cloud Help Me Solve Any of These Problems?</strong> <br />Maybe. But there are many non-Cloud solution options as well. To explain, let’s first define what Cloud is: <br />Cloud is a pool of computing capacity, public or private, that can be provisioned on demand by end users. This pool can expand or contract based on need and can be measured by the capacity used. <br />Or put another way: Cloud is just another computing methodology. <br /> <br />But that dull definition will never be embraced by anyone. Why? Because we are in the middle of “The Cloud Perfect Storm,” a convergence of events—from enhancements in virtual infrastructure to the rise of social media and continued improvements in networking—that has forced firms to consider Cloud as the only answer. <br /> <br />So is Cloud right for you? Just apply these “Five Rules for Determining if Cloud is Right for You.”</p> <p><strong>Rule 1: Don’t Buy into the Hype</strong> <br />In 2010, CRN published a list of cloud predictions such as “2010 is really the year of Platform-as-a-Service,” “Public vs. Private becomes irrelevant,” and “Cloud will truly enable social networking, disaster recovery, WAN optimization.” Bottom line: avoid the hype and follow Rule 2. <br /> <br /><strong>Rule 2: Rely on Fundamentals</strong> <br />·         Define the problem and the strategic need. <br />·         What is the opportunity or pain that may be addressed by Cloud? <br />·         What is the existing (broken) use case and the potential better use case? <br />·         What is the opportunity cost? <br /> <br /><strong>Rule 3: Assess Thyself</strong> <br />·         Do you have the critical infrastructure? <br />·         Do you have the network infrastructure? <br />·         Do you have the server infrastructure? <br />·         Do you have industry-standard security? <br />·         Do you have the technical expertise? <br />·         Do you have the human capacity? <br />·         Do you need a partner? <br /> <br /><strong>Rule 4: Assess Your Partner</strong> <br />·         Partner reputation and financial stability. <br />·         Partner security capabilities, both data and physical. <br />·         Infrastructure / configuration capabilities in relation to your use case. 
<br />·         SLAs, back-out costs, penalties. <br /> <br /><strong>Rule 5: Leverage Vendors</strong> <br />·         Vendors can dramatically expedite the assessment process. <br />·         Leverage Cloud assessment providers. <br />·         Work with existing vendors to complete the self-assessment. <br />·         Leverage Cloud providers for ROI analysis, implementation do’s and don’ts, and project management expertise. <br /> <br />Shed the hype, recognize that Cloud is just one of many options for addressing IT-related issues and providing strategic opportunity, and focus on the fundamentals when assessing any possible solution.</p> Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-16996809772751910642011-09-28T12:14:00.000+05:002011-09-28T13:23:52.009+05:00Poor power quality and its impact on IT equipment<p> <br />Electricity is to a data center as blood is to the human body – both are the lifelines of their respective systems. Electricity is the life force that brings a data center and its equipment to life and keeps them functioning. Hence it is important that this life force is of good quality if the components are to perform continuously and without risk of damage from the very electricity that powers them. </p> <p>Yet the quality of the grid power supplied to a data center may not always be as intended. The power could be of poor quality, and this could result in damage to the electrical and electronic equipment of the data center. We will learn more about this phenomenon below.</p> <p><strong>What is Power Quality?</strong></p> <p>How do you define the term quality for an intangible entity such as electricity? Well, in the case of electric power, quality means that the power is supplied at the designated voltage, amplitude and frequency, without much variation in any of its major parameters. </p> <p>Quality is important because equipment is designed to work within a specific range of values for parameters such as voltage and current. Of course, most equipment can withstand slight variations in these parameters, but any substantial change could result in temporary or permanent damage to the equipment, which in turn would damage the data and information, and that would ultimately trickle down as financial loss and reputational damage for the organization.</p> <p><strong>Parameters of Quality and their Impact</strong></p> <p>If quality is so important, we must understand which parameters constitute quality and what the impact can be when they vary beyond a tolerable range. Some of the important parameters are described below.</p> <p>Voltage: AC power has a peak voltage and an RMS voltage, and both are important parameters. An abnormal increase or decrease in this voltage is known as a swell or a dip respectively, and both are undesirable, as they can lead to component failure or burning. Closely related terms are spike, surge and flicker, which refer to different patterns and time frames of voltage variation. Electric motors are particularly susceptible to damage from such voltage surges, which can lead to overheated windings and failure of winding insulation. Moreover, these motors can themselves cause voltage dips for other electronic equipment, since they draw several times their normal running current while starting; hence the need for proper provision to handle the starting current. 
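</p> <p>To make these voltage terms concrete, here is a minimal Python sketch; the 230 V nominal supply and the 10% tolerance band are illustrative assumptions, not figures from this article. It converts an RMS value to its peak equivalent and labels readings outside the band as dips or swells.</p> <pre>
import math

NOMINAL_RMS_V = 230.0   # assumed nominal supply voltage, for illustration only
TOLERANCE = 0.10        # assumed +/-10% acceptable band

def peak_from_rms(v_rms):
    """For a sinusoidal supply, peak voltage = sqrt(2) * RMS voltage."""
    return math.sqrt(2) * v_rms

def classify(v_rms):
    """Label an RMS reading as normal, a dip (sag) or a swell."""
    low = NOMINAL_RMS_V * (1 - TOLERANCE)
    high = NOMINAL_RMS_V * (1 + TOLERANCE)
    if v_rms < low:
        return "dip"
    if v_rms > high:
        return "swell"
    return "normal"

for reading in [230.0, 198.0, 262.0]:   # sample RMS readings in volts
    print(reading, classify(reading), round(peak_from_rms(reading), 1))
</pre> <p>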
</p> <p>Radio Frequency Interference (RFI): there can be noise present in the electric supply line which may not be seriously damaging to the electronic equipment but could result in disruption of communication and related errors. This noise results in low signal strength and is closely related to the next factor, namely harmonics. </p> <p>Harmonics: these refer to currents at integer multiples of the basic line frequency, and the presence of harmonics can cause several faults, such as false triggers in electronic circuits. This factor is not much of an issue in modern equipment, as most of it has a power factor correction design, which is mandatory under the relevant regulations of the governing bodies. </p> <p><strong>Methods to Improve Power Quality</strong></p> <p>As an end user, a customer such as a data center (or any other organization, for that matter) does not have direct control over the performance and quality of the power supplied by the utility company. Yet there are several techniques available to check, monitor and control the quality of the incoming power supply. </p> <p>One such method is the use of appropriate equipment to help ensure power quality. This equipment could include voltage regulators, surge protectors and so on. These devices help to isolate the sensitive and costly IT equipment from the power grid by acting as a buffer that absorbs sudden shocks in the form of voltage fluctuations and surges. There are also protective devices such as MCBs (miniature circuit breakers) and fuses, which trip whenever a potentially dangerous situation arises so that the equipment is saved from damage. Lightning arrestors are used to prevent power spikes during thunderstorms by absorbing the lightning current and passing it to the earth.</p> <p>Power monitoring should be carried out and the data stored for long-term analysis; this gives a fairly good indication of the level and status of the quality of power available from the grid in a given area over a period of time. Such monitoring can give management a useful advantage in predicting the times when power quality is at its worst, so that appropriate measures and steps can be taken to minimize its impact.</p> <p>Poor power quality may not seem to be a big problem at first, but surveys have revealed that in the United States alone, nearly 150 billion dollars were lost directly or indirectly as a result of failed equipment and lost data due to power supply of poor quality. This figure should be sufficient to give an idea of the seriousness of the problem and the need for data center management to adopt appropriate remedial measures. </p> <p>Moreover, it has been observed that the reliability provided by a typical power grid is much lower than the reliability required of a typical data center, and hence the data center must have provisions to cope with this difference in reliability. The data center needs to invest in the appropriate equipment and arrangements to deal with power quality problems and power failures, whether in the form of a blackout or a brownout. The data and critical operations handled by a data center are simply too important to leave at risk, not only for the customers but also for the data center itself, whose long-term sustenance and reputation depend on them. 
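</p> <p>As a rough illustration of the long-term monitoring idea above, the following Python sketch summarizes a handful of logged RMS voltage readings and counts how often, and in which hours, the supply strayed outside tolerance; the sample data, the 230 V nominal value and the 10% threshold are assumptions made purely for the example.</p> <pre>
from collections import Counter
from datetime import datetime

NOMINAL, TOL = 230.0, 0.10   # assumed nominal voltage and tolerance band

# (timestamp, RMS volts) pairs standing in for data pulled from a monitoring system
samples = [
    ("2011-09-27 09:00", 229.5),
    ("2011-09-27 09:05", 204.0),   # dip
    ("2011-09-27 18:00", 255.3),   # swell
    ("2011-09-27 18:05", 231.1),
]

events_per_hour = Counter()
for ts, volts in samples:
    if abs(volts - NOMINAL) > NOMINAL * TOL:
        hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").strftime("%H:00")
        events_per_hour[hour] += 1

# Hours with the most out-of-tolerance readings are candidates for extra precautions.
for hour, count in events_per_hour.most_common():
    print(hour, count)
</pre> <p>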
</p> Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-79712207691840010222009-03-25T06:07:00.002+05:002009-03-25T06:07:00.908+05:00IT Service Management - Metrics<span style="font-family:verdana;font-size:85%;color:#c0c0c0;"><em>Written by Tsvetanka Stoyanova</em></span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Metrics and other ways to measure performance are very popular among technical people. Almost every aspect of a computer’s performance can be, and is, measured; however, when it comes to service metrics for IT personnel and organizations, this is an area that companies should pay close attention to.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Computers or machines are easier to measure because there are few or no subjective factors. But with organizations, and especially with people, the subjective factor becomes more and more important, and frequently, even if the best methodology is used, the results obtained from metrics are, to put it mildly, questionable.</span><br /><span style="font-family:verdana;"></span><br /><strong><span style="font-family:verdana;">Who Needs IT Service Management Metrics</span></strong><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Metrics are used in management because they are useful. Metrics are not applied just out of curiosity but because investors, managers and clients need the data.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">There is no doubt that metrics are useful only when they are true. I guess you have heard Mark Twain's quote about “lies, damned lies, and statistics” (or in this case – metrics). True metrics are achieved by using reliable methodologies. It is useless just to accumulate data and show it in a pretty graph or an animated slideshow. This might be visually attractive, but the practical value of such data is null.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">However, even when the best IT Service Management metrics methodology is used, deviations are inevitable. Therefore, one should know how to read the data obtained from metrics. It is also true that metrics, including IT Service Management metrics, can be used in a manipulative way, so one should be really cautious when reading metrics and, above all, when making decisions based on them.</span><br /><span style="font-family:verdana;"></span><br /><strong><span style="font-family:verdana;">Where to Look for IT Service Management Metrics</span></strong><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">There are several metrics methodologies in use for IT Service Management, so you can't complain about a lack of choice. Some of these IT Service Management metrics methodologies have been borrowed (with or without adaptation) from other industries, while others have been specifically designed for IT Service Management.<br />Many organizations, including those behind ITIL and ITSM, regularly publish books and reports on IT Service Management, and even though these are not the only organizations that define the de facto standards for IT Service Management metrics, their books and reports are among the top authorities in the field. 
A short abstract from the “Metrics for IT Service Management” book by Peter Brooks can be found online; the sample shows the TOC and includes the first couple of chapters, so if you have the time to read it, it should give you a more in-depth idea of what IT Service Management metrics are and how to use them.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">In addition to the general metrics for IT Service Management, there are sets of metrics for the different areas of IT service, such as configuration management, change management, etc. Therefore, if you are interested in measuring only a particular subarea of your IT services, you don't have to go through the whole set of IT metrics just to get the information for the area in question. Many IT consulting companies have also developed benchmarking and other methodologies that measure IT Service Management, and these documents are also useful.<br />In addition to ITIL, ITSM, and the various consulting companies, other places where you can get IT Service Management metrics ideas are the sites and marketing materials (e.g. white papers) of vendors of software products for IT Service Management. Some of these vendors implement the metrics of other organizations. This is why IT Service Management metrics are often similar and sometimes they are just the same set but from a different angle, which of course can lead to different results.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">There are many such vendors, which you can find by conducting a search in your search engine. Whenever possible, get a trial version (if the vendor offers one), give it a test run and decide for yourself if what you got is what you need. As I already mentioned, IT Service Management metrics are only useful when true. That is why you will hardly want to waste your (and your employees') time and money on a set of IT Service Management metrics that are not applicable to your situation.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">With so many metrics that lead to so many different results in the same situation, one sometimes wonders whether IT Service Management metrics actually measure one and the same thing and whether they are of any good. Yes, IT Service Management metrics are useful, but only when used properly.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-69804051601323569622009-03-24T06:03:00.002+05:002009-03-24T06:03:00.619+05:00What can log data do for you?<span style="font-family:verdana;">Written by Lagis Zavros</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Organizations today are deploying a variety of security solutions to counter the ever-increasing threat to their email and Internet investments. Often, the emergence of new threats spawns solutions by different companies with a niche or a specialty for that specific threat - whether it is a guard against viruses, spam, intrusion detection, spyware, data leakage or any of the other segments within the security landscape.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">This heterogeneous security environment means that there has been a proliferation of log data generated by the various systems or devices. 
As the number of different log formats increases coupled with the sheer volume of log data, the more difficult it becomes for organizations to turn this data into meaningful business information.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Transforming data into information means that you know the “who, what, when, where, and how” - giving you the ability to make informed business decisions. There is no point capturing data if you do not use it to improve aspects of your business. Reducing recreational web browsing, improving network performance, and enhancing security, are just a few outcomes that can be achieved using information from regular log file analysis.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">To achieve these outcomes, it is important for organizations to have a log management process in place with clear policies and procedures and also be equipped with the appropriate tools that can take care of the ongoing monitoring, analysis and reporting of these logs.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Having tools that are only used when a major problem has occurred only gives you half the benefit. Regular reporting is required in order to be pro-active and track patterns or behaviours that could lead to a major breach of policy or impact mission critical systems.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">10 tips to help organizations get started with an effective proactive logging and reporting system:</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">1. Establish Acceptable Usage Polices</span><br /><span style="font-family:verdana;">Establish policies around the use of the Internet and email and make staff aware that you are monitoring and reporting on usage. This alone is an effective step towards reducing inappropriate usage, but if it’s not backed by actual reporting, employees will soon learn what they can get away with.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">2. Establish Your Reporting Requirements</span><br /><span style="font-family:verdana;">Gather information on what you want to report and analyse. Ensure this supports your obligations under any laws or regulations relevant to your industry or geography.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">3. Establish Reporting Priorities</span><br /><span style="font-family:verdana;">Establish priorities and goals based on your organization’s risk management policies. What are the most important security events that you need to be alerted to?</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">4. Research your existing logging capabilities</span><br /><span style="font-family:verdana;">Research the logging capabilities of the devices on your network such as proxy servers, firewalls, routers and email servers and ensure they are producing an audit log or event log of activity.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">5. Address shortfalls between your reporting requirements and log data</span><br /><span style="font-family:verdana;">Open each log file to get a feel for what information is captured and identify any shortfalls with your reporting requirements. 
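</span><br /><span style="font-family:verdana;">By way of illustration, a short script along the following lines can help with that first inspection; the file name and the space-separated field layout are assumptions for the sketch, not the format of any particular product.</span><br /><pre>
# Peek at the first few records of a log file and list the fields each one carries,
# so they can be compared against the reporting requirements.
from itertools import islice

LOG_FILE = "proxy.log"   # hypothetical log file name

with open(LOG_FILE, encoding="utf-8", errors="replace") as handle:
    for line in islice(handle, 5):
        fields = line.strip().split()   # assumes space-separated fields
        print(len(fields), "fields:", fields)
</pre><br /><span style="font-family:verdana;">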
Address any shortfalls by adjusting the logging configuration or implementing an independent logging tool such as WebSpy Sentinel.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">6. Establish Log Management Procedures</span><br /><span style="font-family:verdana;">Establish and maintain the infrastructure and administration for capturing, transmitting, storing and archiving or destroying log data. Remember that archiving reports may not be enough, as sometimes you may be required to go back and extract from the raw data.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Ensure data is kept for an appropriate period of time after each reporting cycle and that the raw data related to important events is securely archived.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">7. Evaluate and Decide on a Log File Analysis Product</span><br /><span style="font-family:verdana;">Evaluate log file analysis and reporting products such as WebSpy Vantage to make sure your log formats are supported, your reporting requirements are met and that the product is capable of automated, ongoing reporting.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Ensure it can be used by business users as well as specialist IT staff, removing the dependence on these busy and critical staff members. Make sure the vendor is willing to work with you to derive value from your log data. Often a vendor that supports many different log formats will have insight that may help you in obtaining valuable information from your environment.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">8. Establish Standard Reporting Procedures</span><br /><span style="font-family:verdana;">Once a reporting product has been decided on, establish how regularly reports should be created, who is responsible for creating them, and who is able to view them. Store user reports in a secure location to ensure confidentiality is maintained.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">9. Assign Responsibilities</span><br /><span style="font-family:verdana;">Identify roles and responsibilities for taking action on events, remembering that responsibility is not only the security administrator’s domain.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">10. Review and Adapt to Changes</span><br /><span style="font-family:verdana;">Because of the ever-changing nature of the security environment, it is important to revisit steps 1-9 regularly and fine-tune this process to get the maximum value.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-34861185076157439352009-03-24T06:00:00.000+05:002009-03-24T06:00:00.124+05:00Are you Web2.0 Savvy?<span style="font-family:verdana;">Web2.0 social networking is now a part of our cultural fabric. Once considered a casual pastime for teenagers, it has now exploded into “the must do thing” for corporate businesses. The transition from being a teenage e-tool to one that the corporate world sees as a must-participate tool has come a long way.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Bookstores are filling with authors’ views and angles on the Web2.0 social networking phenomenon. One book in particular has caught my attention. 
“Throwing Sheep in the Boardroom” is a cute title that amply describes the initial perception of social networking’s impact on businesses. A younger generation of professionals who are Web2.0 social mavericks has been integrating work, marketing, social activities and networking via the social networks, and the older crowd occupying the boardrooms is only now starting to see its importance.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The book, authored by Matthew Fraser and Soumitra Dutta and published by John Wiley & Sons, Ltd., provides a clear picture of the impact that Web2.0 is having on our lives and sets it against the corporate boardroom’s reluctance to embrace the technology as a tool to harness the benefits of collaborative environments.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Data center and IT professionals are no strangers to working online and for the most part are already engaged in some form in the social network scene. For instance, the growth of blogging has carried many well-known IT bloggers into social networking stardom. Blogging is only a part of the Web2.0 scene that many IT socialites are familiar with.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Small technology social groups formed within networks such as LinkedIn and Facebook have exploded. The desire to connect with others who speak and understand IT has always been around, but now Web2.0 has made it easier to do so.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The latest craze gaining publicity has been Twitter. News outlets, politicians and journalists are twittering daily. Twitter (</span><a href="http://www.twitter.com/"><span style="font-family:verdana;">www.twitter.com</span></a><span style="font-family:verdana;">) allows its users to send a 140-character message to the Twitter community (which can be keyword searched) and, more specifically, directly to their followers.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">News agencies and politicians are using this tool to share the latest updates of their day with their followers and constituents. This new tool gives those who follow an “insider” view, with instant news the moment it happens. Recently CNN reported that during a news conference attendees were frantically twittering on their phones to the Twitter network about what they were seeing, hearing and feeling as it happened.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The social networking craze is, and needs to be, a part of every marketing manager’s daily routine. If you are not on LinkedIn (the adult version of Facebook), MySpace, Twitter, ReJaw, Plaxo or countless other networks worldwide, then you are missing an opportunity.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The social networks are a direct connection to a younger generation that has influenced and will continue to influence the IT industry. The social networks can provide you with an insider view and opinion on products, services or just about anything. 
If you want to get a pulse on what is going on, then you need to invest some time and immerse yourself in the Web2.0 social networking scene.<br />A quote from the book states, “Web 2.0 tools are becoming powerful platforms for cooperation, collaboration and creativity.”</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">“If you are not embracing the Enterprise 2.0 model, you risk getting left behind,” says Fraser, coauthor along with Dutta of Throwing Sheep in the Boardroom: How online social networking will transform your life, work and world.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">If you are a marketing manager, sales person or follower of any subject, this book is a must-read to better understand and prepare yourself for the “e-ruptions” that will be created by the Web2.0 social networking revolution. To learn more visit </span><a href="http://www.throwingsheep.com/"><span style="font-family:verdana;">www.throwingsheep.com</span></a><span style="font-family:verdana;"> or purchase a copy at a bookstore, or direct from the publisher by calling 800-225-5945.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-75972494686168794942009-03-23T06:55:00.001+05:002009-03-23T06:55:00.926+05:00Ensuring your data center facility is compliant<em><span style="font-family:verdana;font-size:85%;color:#c0c0c0;">Written by Rakesh Dogra</span></em><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Data centers are becoming ever more important in virtually all walks of business, commerce and industry, and their presence is felt on all fronts in these areas. Because of the prominent place they have achieved, their impact on normal activities is increasing as well, and any disruption to these data centers could bring business and commercial activities to a standstill, at least temporarily, causing huge losses to the company and its clients, damaging its reputation and, most importantly, putting at risk the valuable and often confidential data and information that these data centers handle and process.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Therefore governments and regulatory bodies have been increasingly putting data centers under their scanner, and there are growing attempts to put more regulatory mechanisms in place to ensure that data centers comply with certain minimum standards across various platforms. This helps to ensure consistency and uniformity, at least as a minimum level of quality and efficiency, across the entire data center industry.</span><br /><span style="font-family:verdana;"></span><br /><strong><span style="font-family:verdana;">What to Comply with?</span></strong><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Compliance means adhering to certain benchmarks, which are defined in the various regulations and directives set forth by the appropriate bodies. As far as data centers are concerned, there are various benchmarks with which they should comply, and these include several such regulations. 
It must be noted that not all regulations apply to all types of data centers, as we shall see below, where some of these regulations are listed:</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Sarbanes-Oxley Act – this is a US federal act which applies to all public companies and does not necessarily apply to privately held companies. The act has various sections which deal with different areas of compliance; for example, sections 302 and 404 are concerned with implementing internal controls.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">HIPAA – this is basically related to health care services and hence would affect data centers that process information for hospitals and other medical facilities, since the act also covers the security of electronically stored information related to patients and their medical condition.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Similarly, there are several other regulations with which data centers should comply. Some of them deal with the safe operation of electrical equipment, while others ensure that safe working practices are followed in all areas of the data center.</span><br /><span style="font-family:verdana;"></span><br /><strong><span style="font-family:verdana;">Ensuring Compliance</span></strong><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">It can be a daunting task to comply with the various regulations to which a data center is subject. Nevertheless, this is no excuse for management or staff to ignore compliance issues or take them lightly. The first step to ensuring compliance is to find out which regulations the data center needs to comply with. This is necessary because, as already mentioned, not all regulations apply to all data center facilities; applicability varies with the type of data center, its location and the services it provides.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The data center management needs to find out the exact compliance requirements, and it can take the help of professional third parties if it is not fully capable of doing such an analysis on its own. Some regulations require compliance at the very initial stages, such as laying out the electrical system in accordance with the relevant safety standards, while others require compliance at later stages of the data center’s life.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">After the various regulations have been identified, the management needs to ensure compliance with every single one and take the steps necessary to ensure that the data center adheres to the suggested guidelines. 
Again, it might be necessary to take external professional help if the data center is small, short of resources and unable to do this on its own.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">It must be remembered that one of the most important steps in ensuring compliance is to keep the required documentation and paperwork up to date, since compliance not only needs to be present in the actual workplace but also needs to be documented and recorded for reference and regulatory purposes, in order to show that everything is as it should be.</span><br /><span style="font-family:verdana;"></span><br /><strong><span style="font-family:verdana;">Procedures and Work Policies</span></strong><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">There should be set procedures for carrying out all the important activities where slight negligence or mistakes could otherwise lead to serious damage. Experience has shown that minor human errors are one of the most important causes of failures in data centers, failures which could have been avoided had the management been a little more careful in designing and laying out procedures for work.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">A simple example which confirms this is an incident reported some time ago in which a data center was shut down simply because an employee pressed the emergency stop switch by mistake, costing the data center a lot of money, apart from the loss of clients due to the disruption of critical activities.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Laying down procedures is itself an elaborate task which needs to be done after careful consideration, in line with the preferred practices set out in instruction manuals and other regulatory procedures, combined with the experience of the personnel. These procedures are then tested before being accepted as a matter of work policy and displayed at appropriate places across the data center, and training sessions can be conducted to drill these procedures into the staff. This training can either be done in-house or by external vendors who provide such professional training.</span><br /><span style="font-family:verdana;"></span><br /><strong><span style="font-family:verdana;">Summary</span></strong><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Hence we see that running a data center is not only about taking care of purely technical subject matters; the data center should also comply with the various policies, procedures, regulations and guidelines that have been laid out by the different authorities relevant to its sphere of operation. Data center management should ensure that the applicable regulations are adhered to as fully as possible, so that the risk of downtime, which matters so much in the data center industry, is kept to a minimum.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-84648674002221730582009-03-21T06:47:00.002+05:002009-03-21T06:47:00.747+05:00The State of Today’s Data Center: Challenges and Opportunities<span style="font-family:verdana;font-size:85%;color:#c0c0c0;"><em>Written by Marty Ward and Sean Derrington</em></span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Data center managers are caught between a rock and a hard place. 
They are expected to do more than ever—including protecting rapidly expanding volumes of data and a growing number of mission-critical applications, managing highly complex and wildly heterogeneous environments, meeting more challenging service level agreements (SLAs), and implementing a variety of emerging “green” business initiatives.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">And, they are expected to do it with less than ever—including fewer qualified staff and less-than-robust budgets. In fact, according to the 2008 State of the Data Center survey conducted by Applied Research, reducing costs is by far the highest key objective of data center managers today, followed by improving service levels and improving responsiveness. In other words, IT organizations are indeed laboring to do more with less.</span><br /><p><span style="font-family:verdana;">The good news? A growing number of creative data center managers are using a variety of cost-containment strategies that capitalize on heterogeneity to increase IT efficiency and maximize existing resources while keeping costs under control. At the foundation of these solutions is a single layer of infrastructure software that supports all major applications, databases, processors, and storage and server hardware platforms.</span></p><p><span style="font-family:verdana;">By leveraging various technologies and processes across this infrastructure, IT organizations can better protect information and applications, enhance data center service levels, improve storage and server utilization, manage physical and virtual environments, and drive down capital and operational costs.</span></p><p><span style="font-family:verdana;"><strong>Increasing IT Efficiency</strong></span></p><p><span style="font-family:verdana;">In IT organizations around the world, staffing remains a challenge. According to the State of the Data Center report, 38 percent of organizations are understaffed while only four percent are overstaffed. Moreover, 43 percent of organizations report that finding qualified applications is a very big issue—a problem that is exacerbated when dealing with multiple data centers.</span></p><p><span style="font-family:verdana;">While 45 percent of organizations respond by outsourcing some IT tasks, a number of equally effective alternatives are also available. The most common of these strategies, used by 42 percent of organizations, is to increase automation of routine tasks. This not only reduces costs but also frees IT to address more strategic initiatives.</span></p><p><span style="font-family:verdana;"><strong>Storage Management</strong></span></p><p><span style="font-family:verdana;">A growing number of heterogeneous storage management tools automate daily and repetitive storage tasks, including RAID reconfiguration, defragmentation, file system resizing, and volume resizing. 
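</span></p><p><span style="font-family:verdana;">As a trivial illustration of what automating a routine storage check can look like, a script like the following could flag file systems that need attention instead of relying on someone to check them by hand; the mount points and the 80 percent threshold are made up for the example.</span></p><pre>
import shutil

MOUNT_POINTS = ["/", "/var", "/data"]   # hypothetical file systems to watch
THRESHOLD = 0.80                        # assumed utilization level that triggers action

for mount in MOUNT_POINTS:
    try:
        usage = shutil.disk_usage(mount)
    except FileNotFoundError:
        continue                        # skip mount points that do not exist here
    used_fraction = usage.used / usage.total
    if used_fraction > THRESHOLD:
        # In a real environment this is where a resize job or a ticket would be raised.
        print(mount, "needs attention:", round(used_fraction * 100), "% used")
    else:
        print(mount, "OK:", round(used_fraction * 100), "% used")
</pre><p><span style="font-family:verdana;">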
With advanced capabilities such as centralized storage management, online configuration and administration, dynamic storage tiering, dynamic multi-pathing, data migration, and local and remote replication, these solutions enable organizations to reduce both operational and capital costs across the data center.</span></p><p><span style="font-family:verdana;">Furthermore, agentless storage change management tools are emerging that enable a centralized, policy-driven approach to handling storage changes and configuration drift to help reduce operational costs while requiring minimal deployment and ongoing maintenance effort.</span></p><p><span style="font-family:verdana;"><strong>High Availability/Disaster Recovery</strong></span></p><p><span style="font-family:verdana;">High availability solutions such as clustering tools can also streamline efficiency by monitoring the status of applications and automatically moving them to another server in the event of a fault. These high availability solutions detect faults in an application and all its dependent components, then gracefully and automatically shut down the application, restart it on an available server, connecting it to the appropriate storage devices, and resuming normal operations.</span></p><p><span style="font-family:verdana;">For disaster recovery purposes, these clustering tools can be combined with replication technologies to completely automate the process of replication management and application startup without the need for complicated manual recovery procedures involving storage and application administrators. These high availability and disaster recovery solutions also ensure increased administrator efficiency by providing a single tool for managing both physical and virtual environments.</span></p><p><span style="font-family:verdana;"><strong>Data Protection</strong></span></p><p><span style="font-family:verdana;">Next-generation data protection can also be used to reduce the operational costs of protecting and archiving data as well as to meet internal SLAs and external governance requirements. With automated, unified data protection and recovery management tools that are available from a single console and work across a heterogeneous physical and virtual environment, organizations can maximize IT efficiency. A number of these tools provide for additional efficiencies through capabilities such as continuous data protection, advanced recovery of critical applications, data archiving and retention, and service-level management and compliance.</span></p><p><span style="font-family:verdana;"><strong>Maximizing Resources</strong></span></p><p><span style="font-family:verdana;">In addition to containing costs through increased IT efficiency, organizations are also implementing a variety of technology approaches—from virtualization and storage management to high availability tools and “green IT” practices—to make better use of existing hardware resources.</span></p><p><span style="font-family:verdana;"><strong>Virtualization</strong></span></p><p><span style="font-family:verdana;">Server and storage virtualization can be used to improve utilization of existing hardware, thereby obviating the need to buy additional resources. 
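</span></p><p><span style="font-family:verdana;">A back-of-the-envelope calculation illustrates the effect; the figures below (20 servers averaging 10 percent utilization, a 60 percent comfort level per virtualization host) are purely illustrative assumptions rather than survey data.</span></p><pre>
# Rough consolidation estimate: how many virtualization hosts could absorb the work
# currently spread across many underutilized physical servers?
physical_servers = 20       # assumed existing server count
avg_utilization = 0.10      # assumed average utilization per server
target_utilization = 0.60   # assumed comfortable utilization per host

total_load = physical_servers * avg_utilization       # 2.0 "servers' worth" of work
hosts_needed = -(-total_load // target_utilization)   # ceiling division -> 4 hosts
print(physical_servers, "servers could consolidate onto about", int(hosts_needed), "hosts")
</pre><p><span style="font-family:verdana;">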
According to the State of the Data Center survey, 31 percent of organizations are using server virtualization and 22 percent are using storage virtualization as part of their cost-containment strategies.</span></p><p><span style="font-family:verdana;">Of course, because virtualization introduces complexity into the IT infrastructure, organizations looking to fully realize the benefits of this technology while driving down capital costs are advised to also implement a management framework that provides architectural flexibility and supports multiple virtualization platforms as well as physical environments.</span></p><p><span style="font-family:verdana;"><strong>Storage Management</strong></span></p><p><span style="font-family:verdana;">While storage capacity continues to grow, storage is often underutilized. To make better use of storage resources, organizations can leverage storage management technologies. Storage resource management (SRM), for example, enables IT to gain visibility into the storage environment, understand which applications are connected to each storage resource, and see exactly how much of the storage is actually being used by each application. Once this level of understanding is obtained, organizations can make informed decisions about how to reclaim underutilized storage and can use the same data to predict future capacity requirements. Seventy-one percent of respondents indicated they are exploring SRM solutions.</span></p><p><span style="font-family:verdana;">In addition, thin provisioning can be used to improve storage capacity utilization. Storage arrays enable capacity to be easily allocated to servers on a just-enough and just-in-time basis.</span></p><p><span style="font-family:verdana;"><strong>High Availability</strong></span></p><p><span style="font-family:verdana;">Clustering solutions that support a variety of operating systems, physical and virtual servers, as well as a wide range of heterogeneous hardware configurations provide an effective strategy for maximizing resource utilization. With these solutions, IT can consolidate workloads running on underutilized hardware onto a smaller number of machines.</span></p><p><span style="font-family:verdana;"><strong>Green IT Practices</strong></span></p><p><span style="font-family:verdana;">Among the various strategies for meeting green IT directives are server virtualization and data deduplication. Data deduplication can decrease the overhead associated with holding multiple copies of the same data by identifying common data and reducing the copies to a single entity. This, in turn, can have a dramatic impact on the amount of disk storage required for archiving purposes as well as the number of disks required for backup purposes. Seventy percent of respondents indicated they are considering implementing data deduplication in their efforts to maximize storage efficiency.</span></p><p><span style="font-family:verdana;">The challenges data center managers face today will likely continue as they are called upon to help their organizations meet budgetary requirements while delivering critical services with fewer personnel and limited IT resources. 
By leveraging technologies and processes that increase IT efficiency and maximize existing resources, IT can effectively do more with less now and into the future.</span></p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-59877755066502563392009-03-20T06:40:00.002+05:002009-03-20T06:40:00.492+05:00A Security Experts Guide to Web 2.0 Security<span style="font-family:verdana;font-size:85%;color:#c0c0c0;"><em>Written by Roger Thornton & Jennifer Bayuk</em></span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Web 2.0 has made the Web a livelier and friendlier place, with social Web sites, wikis, blogs, mashups and interactive services that are fun as well as useful. There are two Web 2.0 concepts that change the game for CISOs, and that they need to understand.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The first is the introduction of rich client interfaces (AJAX, Adobe/Flex) while the other is a shift to community controlled content as opposed to publisher consumer model. Both have serious security issues.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;"><strong>It’s all good news about Web 2.0, right?</strong></span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Yes, unless you happen to be responsible for securing the Web 2.0 environment for your business or enterprise. Then, you might just lament that we’ve taken the data-rich server model of the 1970’s and grafted it onto the interface-rich client model of the 1980’s and 90’s, giving us more capabilities but also a more complex—and vulnerable—computing environment.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">We have to deal with the problems traditionally encountered using interface-rich clients—viruses, Trojans, man in the middle attacks, eavesdropping, replay attacks, rogue servers and others. And all of these apply to every interface in a Web 2.0 mashup, which could have dozens of clients in one application.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">In addition, the user community has changed from being simply indifferent to being willfully ignorant of the value of information. Users willingly post the most revealing details about their employers and their professional lives (not to mention their personal lives) on MySpace, Facebook, LinkedIn and Twitter—information that is easily available to just about anyone.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The problem is painfully obvious for the security professional: More complexity and openness creates vulnerabilities and opportunities for attack and the release of confidential information. This all results in more headaches for security professionals who have to be vigilant in order to keep their IT environments secure.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;"><strong>What’s a CISO to do?</strong></span><br /><span style="font-family:verdana;">Although some companies have tried all options, you can’t easily write your own browser, isolate your users from the Web, or control everything that happens on their PC desktop. 
However, there are steps you can take that can seriously improve your odds of winning the battle over Web 2.0 vulnerabilities.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">For community controlled content:</span><br /><span style="font-family:verdana;">1. Educate yourself and your company, developers, vendors and end users about Web 2.0 vulnerabilities. Institute a clearing process for the use and inventory of new Web 2.0 components before they are incorporated into your business environment.</span><br /><span style="font-family:verdana;">2. Segregate users’ network access for those who need and those who don’t need access to social networking sites.</span><br /><span style="font-family:verdana;">3. Establish a policy identifying inappropriate professional topics for public discussion on the Web or through online social services.</span><br /><span style="font-family:verdana;">4. Create desktop policies and filters that block, as much as possible, interactions with unknown and untested software.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">When deploying rich client interfaces:</span><br /><span style="font-family:verdana;">5. Assign a cross-functional team to work with software development and application owners to educate themselves on the risks of incorporating Web 2.0 components into applications. Have your own developers recognize and control the use of potentially vulnerable tools such as ActiveX and JavaScript.</span><br /><span style="font-family:verdana;">6. Require your vendors to meet secure coding standards.</span><br /><span style="font-family:verdana;">7. Vigorously stay on top of vulnerabilities and exploits. Use your Web 2.0 inventory to establish a quick response plan to mitigate software as issues arise.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-19504564249732018552009-03-19T07:31:00.002+05:002009-03-19T07:31:00.926+05:00Top Ten Data Security Best Practices<span style="font-family:verdana;font-size:85%;color:#c0c0c0;"><em>Written by Gordon Rapkin</em></span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">1: Don’t narrow security focus during economic downturns</span><br /><span style="font-family:verdana;">When IT budgets are slashed it’s tempting to concentrate only on achieving compliance with regulatory requirements in order to avoid fines, other sanctions and bad publicity.</span><br /><span style="font-family:verdana;"><br />The problem is that centring security solely on meeting the bare minimums required to be in compliance ensures that critical data is not secured as comprehensively as it should be. Gambling with data security in a downturn is a particularly risky business -- financial pressures logically lead to an increased threat level from those who are hoping to profit from purloined data. Companies should, even in difficult times, work towards comprehensive security rather than simple compliance with regulations.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">2: Have a clear picture of enterprise data flow and usage</span><br /><span style="font-family:verdana;">You can't protect data if you don't know where it is. Comprehensive audits typically reveal sensitive personal data tucked away in places that you’d never expect to find it, unprotected in applications and databases across the network. 
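</span><br /><span style="font-family:verdana;">As a toy illustration of that discovery step, a scan like the one below can reveal where sensitive-looking values have ended up; the directory, the file types and the crude patterns are assumptions for the sketch, and a real audit would also cover databases and applications.</span><br /><pre>
import re
from pathlib import Path

SCAN_ROOT = Path("/srv/shared")   # hypothetical directory to audit
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # crude card-number shape
}

for path in SCAN_ROOT.rglob("*.txt"):   # only plain text files in this sketch
    text = path.read_text(errors="ignore")
    for label, pattern in PATTERNS.items():
        hits = pattern.findall(text)
        if hits:
            print(path, ":", len(hits), "possible", label, "value(s)")
</pre><br /><span style="font-family:verdana;">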
Conduct a full audit of the entire system and identify all the points and places where sensitive data is processed and stored. Only after you know where the data goes and lives can you develop a plan to protect it. The plan should address such issues as data retention and disposal, user access, encryption and auditing.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">3: Know your data</span><br /><span style="font-family:verdana;">If the enterprise doesn’t classify data according to its sensitivity and its worth to the organisation, it’s likely that too much money is being spent on securing non-critical data. Conduct a data asset valuation considering a variety of criteria, including regulatory compliance mandates, application utilisation, access frequency, update cost and competitive vulnerability, to arrive at both a value for the data and a ratio for determining appropriate security costs. Specifically gauge the risk associated with employees and how they use the data. If staff are on a minimum wage, transient and/or have low security awareness, the data may be worth more than their pay, so the risk goes up.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Usage also impacts on the level of security required. If the data only exists on isolated systems behind many layers of access control, then the risk may be lower and the security may be more modulated.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">4: Encrypt data end-to-end</span><br /><span style="font-family:verdana;">Best practices dictate that we protect sensitive data at the point of capture, as it's transferred over any network (including internal networks) and when it is at rest. Malicious hackers won’t restrict themselves to attacking only data at rest; they’re quite happy to intercept information at the point of collection, or anywhere in its travels. The sooner encryption of data occurs, the more secure the environment.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">5: Regulation is not a substitute for education</span><br /><span style="font-family:verdana;">Technology controls should certainly be in place to prevent employees from intentionally or mistakenly misusing data. But it’s important that everyone understands the reasons for the data protection measures which are in place. One of the most positive steps an enterprise can make is to institute ongoing security awareness training for all employees to ensure that they understand how to identify confidential information, the importance of protecting data and systems, acceptable use of system resources and email, the company's security policies and procedures, and how to spot scams. People who understand the importance of protecting data and who are given the tools that help them to do so are a great line of defence against malicious hackers. 
The other side of this coin is that people will always find a way to thwart security measures that they don't understand, or that impact negatively on their productivity.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">6: Unify processes and policies</span><br /><span style="font-family:verdana;">Disparate data protection projects, whether created by design or due to company mergers, almost always result in a hodge-podge of secured and unsecured systems, with some data on some systems encrypted and some not, some systems regularly purged of old data on a monthly basis and others harbouring customer information that should have been deleted years ago. If this is the case within your enterprise, consider developing an enterprise-wide unified plan to manage sensitive data assets with the technologies, policies and procedures that suit the enterprise’s business needs and enable compliance with applicable regulations and standards.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">7: Partner responsibility</span><br /><span style="font-family:verdana;">Virtually all data protection and privacy regulations state that firms can’t share the risk of compliance, which means that if your outsourcing partner fails to protect your company's data, your company is at fault and is liable for any associated penalties or legal actions that might arise from the exposure of that data. Laws concerning data privacy and security vary internationally. To lessen the chance of sensitive data being exposed deliberately or by mistake, you must ensure that the company you are partnering with — offshore or domestic — takes data security seriously and fully understands the regulations that affect your business.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">8: Audit selectively</span><br /><span style="font-family:verdana;">Auditing shouldn’t be a huge data dump of every possible bit of information. To be useful it should be selective. Selective, granular auditing saves time and reduces performance concerns by focusing on sensitive data only. Ideally, the logs should focus on the most useful information for security managers; that is, activity around protected information. Limiting the accumulation of audit logs in this way helps to ensure that all critical security events will be reviewed.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">9: Consider physical security</span><br /><span style="font-family:verdana;">It seems that every week we hear about the laptop that was left behind in a cab, the DVD disks that were found in the rubbish, the unencrypted backup tapes that showed up sans degaussing for sale on eBay, the flash drive that was used to steal thousands of documents, etc. Doors that lock are as important to security as threat intrusion software. Always consider 'what if this ______ was stolen?' No matter how you fill in the blank, the question elicits a strategy for physical security.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">10: Devise value-based data retention policies</span><br /><span style="font-family:verdana;">Retaining sensitive data can be very valuable for analytic, marketing and relationship purposes, provided it is retained in a secure manner. Make sure that stored data is really being used in a way that brings real benefits to your organisation. 
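</span><br /><span style="font-family:verdana;">One very small sketch of what a value-based retention rule can look like in practice follows; the record layout and the two-year window are assumptions chosen only to illustrate the idea.</span><br /><pre>
from datetime import date, timedelta

RETENTION = timedelta(days=730)   # assumed two-year retention window
cutoff = date.today() - RETENTION

# Stand-in records: (customer id, date the record was last used by the business)
records = [
    ("c-1001", date(2008, 5, 2)),
    ("c-1002", date.today() - timedelta(days=30)),
]

kept = [r for r in records if r[1] >= cutoff]
expired = [r for r in records if r[1] < cutoff]

# Expired records would be securely archived or destroyed according to policy;
# here the sketch simply reports them.
print("keeping", len(kept), "- expiring", len(expired), "record(s) older than", cutoff)
</pre><br /><span style="font-family:verdana;">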
The more data you save, the more data you have to protect. If securely storing data is costing more than its value to your organisation, it's time to refine your data retention policy.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-50041280920968031492009-03-18T07:35:00.001+05:002009-03-18T07:35:01.002+05:00Seven Things to Improve ITPA implementation<span style="font-family:verdana;"><em><span style="font-size:85%;color:#cccccc;">Written by Travis Greene</span></em><br /><br />What are you doing to prepare for the next big thing in IT management? Alright, that question is a bit unfair. You likely have all you can handle, dealing with projects and ongoing operations, and making it hard to focus on the next big thing.</span><br /><span style="font-family:verdana;"><br />And what is this next big thing anyway? Sounds like vendor hype. While a debate about the next big thing could certainly include topics as diverse as how to manage IT services deployed in the cloud to heterogeneous virtualization management, it would seem appropriate to include the growing movement towards IT Process Automation (ITPA).<br /><br />ITPA is gaining attention because it simultaneously reduces costs and improves IT service quality across a broad range of IT disciplines. In general, automation brings about a reduction in manual labor (the highest cost element in IT management), and reduces the potential for human error (with an associated improvement in service quality and availability). While introducing process manually can boost efficiency, it also has a tendency to increase costs when factoring in the documentation overhead that is required. Whether there is a need to automate simple, discrete tasks or broader cross-discipline processes, ITPA is one of those rare technologies that offers compelling value for both.<br /><br /><strong>Real needs that drive ITPA implementations</strong><br /><br />With any new technology, it helps to have examples of real value that has been obtained by organizations implementing it. In preparing this article, the following organizations contributed ideas. 
Listed are the needs that initially drove them to adopt ITPA.</span><br /><ul><li><span style="font-family:verdana;">Management Service Provider – Improve efficiency, measured by the ratio of servers to operations personnel, and quality of service delivery for customer-specific processes by automatically handling complex application problems, reducing the chance for human error.</span></li><li><span style="font-family:verdana;">Energy Utility – Perform job scheduling tasks to ensure on-time and consistent execution, as well as reduce the manual labor associated with starting and monitoring jobs.</span></li><li><span style="font-family:verdana;">Financial Services Company – Create user self-service processes to reduce calls and workload at the help desk.</span></li><li><span style="font-family:verdana;">Healthcare Organization – Correlate events from monitoring tools and automate event response to reduce the cost of event management and minimize downtime.</span></li><li><span style="font-family:verdana;">Large US Government Agency – Provision and manage virtual machines to ensure proper authorization and reduce virtual machine sprawl.</span></li><span style="font-family:verdana;"></ul><p><br /><strong>Seven lessons that will improve your ITPA implementation</strong></p><p>Early adopters can reap big benefits (such as the attention of vendors seeking to incubate the technology) but will also make mistakes that prudent organizations will seek to learn and avoid. Fortunately, as ITPA approaches mainstream adoption, these lessons are available from those who have gone before. Here are seven lessons learned from the organizations profiled above.<br /><br />1. Get started with three to five processes that result in quick wins<br /><br />Why three to five? Some may not work out like you planned, or take longer to implement than originally anticipated. Moreover, demonstrating ROI on the ITPA investment will, in most cases, take more than one process. But trying to implement too many processes in the early days of implementation can dilute resources, resulting in delays as well.<br /><br />The ideal processes that qualify as “quick wins” are focused on resolving widely-recognized pains, yet do not require buy-in or integration across multiple groups or tools in the IT organization. Deploying a new technology can expose political fractures in the organization. For all of the organizations listed above, their first processes did not require approvals or use outside of a specific group; yet because they targeted highly visible problem areas, they were able to justify the implementation costs and offer irrefutable evidence (real business justification) for continued deployment of IT Process Automation.<br /><br />2. Identify the follow-on processes to automate before you start<br /><br />While the first three to five processes are critical, it’s a good idea to consider what will be the second act for your ITPA implementation, even before beginning the first. This allows momentum to be maintained, because as the details of implementation consume your attention, the focus will be there rather than on what is ahead, and at some point you will run out of processes ready to be implemented. A better approach is to stimulate demand by showing off the early results and generate a queue of processes to be automated.<br /><br />The risk of not taking this step is that the deployment will stall and ROI will be limited. 
With ITPA technologies you can generate positive ROI from the first few processes that you automate; however, significant additional ROI can be generated from each new process that is automated, especially once the infrastructure and expertise are already in place. This advice may seem obvious, but almost all of the organizations listed above have fallen into the trap of losing momentum after the initial deployment, which could have been avoided with advance planning.<br /><br />3. Consider integration requirements, now and in the future<br /><br />ITPA technologies work by controlling and providing data to multiple other tools and technologies. Therefore, the best ITPA tools make the task of integration easy, either through purpose-built adapters or customizable APIs. Obviously, the less customized the better, but consider where the ITPA vendor's roadmap is going as well. Even if coverage is sufficient today, if you decide to introduce tools from different vendors later, will they be supported?<br /><br />4. Pick the tool that matches both short and long-term requirements<br /><br />Besides the integration requirements listed in lesson #3, the ITPA technology options should be weighed from both a short- and long-term perspective. For example, some ITPA tools are better at provisioning or configuration management; others are better at handling events as process triggers. While meeting the initial requirements is important, consider which tool will provide the broadest capabilities for future requirements as well, to ensure that additional ROI is continuously achievable.<br /><br />5. Get buy-in from the right stakeholders<br /><br />One challenge commonly seen when selecting an ITPA technology is paralysis through analysis. This is often because too broad a consensus is needed to select a solution. Because ITPA technology is new, there is confusion in the marketplace over how to interpret the differences between vendor products. There is also confusion as to how this new technology can replace legacy investments such as job scheduling tools, to provide IT with greater value and functionality. Early adopters found that the resulting overly cautious approach ultimately delayed the ROI that ITPA offers.<br /><br />What may not be as intuitive is the need to obtain buy-in from administrators, who will perceive ITPA as a threat to their jobs and can sabotage efforts to document processes. Involve these stakeholders in the decision process and reassure them that the time they save through automation will be put to better use for the business.<br /><br />6. Dedicate resources to ensure success<br /><br />Personnel are expensive, and one of the critical measures of the ROI potential in ITPA is how much manual labor is saved. So it would seem counter-intuitive to commit dedicated resources to building and maintaining automation on an ongoing basis. Yet without the expertise to build good processes, the ROI potential will diminish.<br /><br />To get started, some organizations have dedicated 20 to 40 percent of a full-time employee (one or two days per week) to working on automating processes. Once the value has been established, dedicating additional time has proven to be relatively easy to justify.<br /><br />7. Calculate the return on investment with each new process automated<br /><br />Much has been made of ROI in the lessons listed above. In today's macro-economic environment, a new technology purchase must have demonstrable ROI to be considered. 
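</span><br /><span style="font-family:verdana;">The calculation itself, described in words in the next paragraph, is simple arithmetic. Here is a minimal sketch, assuming Python and using entirely hypothetical figures rather than anything from the organizations profiled above.</span><br /><pre>
# Minimal ROI sketch for a single automated process (all figures hypothetical).
def annual_roi(runs_per_year, manual_minutes_per_run, hourly_labor_cost,
               automation_fraction, implementation_cost):
    """Return (annual_savings, simple_roi) for one automated process."""
    manual_cost = runs_per_year * (manual_minutes_per_run / 60.0) * hourly_labor_cost
    savings = manual_cost * automation_fraction   # few processes reach 100% automation
    roi = (savings - implementation_cost) / implementation_cost
    return savings, roi

# Example: 2,000 runs/year, 30 manual minutes each, $60/hour labor,
# 50% of the work automated, $20,000 to implement.
savings, roi = annual_roi(2000, 30, 60, 0.5, 20000)
print(f"Annual savings: ${savings:,.0f}, first-year ROI: {roi:.0%}")
</pre><br /><span style="font-family:verdana;">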
ROI is generally easy to prove with ITPA, which is driving much of the interest. To calculate it, you need data to support the time it takes to manually perform tasks and the cost of that labor time. Then you need to compare that to the percentage of that time that can be saved through automation. Note that very few processes can be 100 percent automated, but there can still be significant value in automating even as little as 50 percent of a process. Perform the ROI analysis on every single process you automate. Each one has its own potential, and collectively, over time, can produce startling results.<br /><br />Proceed with confidence<br /><br />Although it may be new to you, ITPA has been around for years and has reached a level of maturity that is sufficient for most organizations that would normally be risk adverse and unable to invest in leading edge technologies. Taking these lessons (and learning new ones in online communities) is one way to improve your chances for success. Now is the time to enjoy the return on investment that ITPA can offer.<br /><br />About the Author: Travis Greene is Chief Service Management Strategist for NetIQ. As NetIQ’s Chief Service Management Strategist, Travis Greene works directly with customers, industry analysts, partners and others to define service management solutions based on the NetIQ product and service base. After a 10-year career as a US Naval Officer, Greene started in IT as a systems engineer for an application development and hosting firm. He rose rapidly into management and was eventually promoted to the National Director of Data Center Operations, managing four data centers nationwide. In early 2002, a Service Provider hired him to begin experimenting with the ITIL framework to improve service quality. Greene was a key member of the implementation and continuous improvement team, funneling customer feedback into service improvements. Through this experience and formal training, he earned his Manager's Certification in IT Service Management. Having delivered a level of ITIL maturity, Greene had a desire to bring his experience to a broader market and founded a consulting firm, ITSM Navigators, where his team specialized in ITIL implementation consulting for financial corporations. Greene possesses a unique blend of IT operations experience, process design, organizational leadership and technical skills. He fully understands the challenges IT faces in becoming more productive while improving the value of IT services to the business. He is an active member of the IT Service Management Forum (itSMF) and is a regular speaker at Local Interest Groups and national conferences. Greene is Manager Certified in ITIL and holds a BS in Computer Science from the US Naval Academy. </span></p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-47543538409018474212009-03-17T07:35:00.002+05:002009-03-17T07:35:00.881+05:00Enterprise Application Skills<span style="font-family:verdana;font-size:85%;color:#c0c0c0;">Written by Tsvetanka Stoyanova</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Enterprise application skills is not new to IT and there are hundreds of thousands, if not millions of IT pros, who specialize in this area. 
For many people enterprise application skills are just one of the many areas they have some basic knowledge of, while for others, enterprise application skills are the core competency and a lifetime career.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">If you belong to the second group, then there is good news for you: the demand for enterprise application skills is not only steady, it is increasing.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Maybe you are asking yourself: should I get more training in that direction? If you don't have a solid background in enterprise applications, it will require plenty of off-the-job training. Enterprise application technology is hardly something one can learn overnight, and it is an area, like many other IT areas, where beginners are not tolerated. The demand is for enterprise application experts with years of experience. Quite simply, enterprise applications are not a beginner-friendly area, but the demand for them is high.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;"><strong>There Is No Recession for Enterprise Application Skills?</strong></span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">It might sound strange that the demand for enterprise application skills is increasing now, when IT budgets are shrinking and news of layoffs and bankruptcies is flooding in from all directions. During the previous recession, at the beginning of this century, networking skills were in demand (at least according to some major industry analysts); now it is enterprise application skills' turn.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The demand for enterprise application skills goes up and down in drastic shifts. Several years ago IDC reported a decrease in the demand for enterprise application skills, while now, as surprising as it might sound, the wave is going up. The fact of the matter is that demand is now up for these skills and the talent pool is shallow.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">SAP skills have grown between 25% and 30% in value in recent months. Also hot are unified messaging, wireless networking, PHP, XML, Oracle, business intelligence and network security management skills. And over the past year, SANs, VoIP and virtualization posted pay gains.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">SAP has so many products that studying them all is impossible even for someone who decides to spend his or her whole career on them, so it is useful to know which of them are the leaders. This article, which deals with pay rises for SAP professionals, sheds some light on which areas are the most lucrative.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">It is obvious that the differences in pay between different SAP products are drastic, ranging from a 57.1 percent increase for SAP Materials Management to a 25% drop for SAP Payroll, so it is certainly not accurate to say that all SAP experts are well paid.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Another major category of enterprise application skills, which still sells, is Windows enterprise application skills. 
This is hardly surprising: Windows is the dominant operating system in enterprises, and companies need people to maintain it.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">There is a clear demand for professionals in the enterprise application arena. The demand will continue to increase as other technologies, such as cloud computing, become widely used, as many predict.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-6072104967456051332009-03-14T07:26:00.000+05:002009-03-14T07:26:00.792+05:00Proper Sizing of Your Generator for your Data Center<span style="font-family:verdana;font-size:85%;color:#c0c0c0;">Written by Rakesh Dogra</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Data centers are popping up around the world. These facilities that we rely on to process and store our information are becoming more and more critical every day. The high criticality of these facilities is creating a spike in data center availability requirements.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Of course there are several factors which could lead to a disruption in the services provided by a data center, but one main factor is grid power failure, which could bring the entire system to a halt unless the necessary provision is made for backup.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Such a provision exists in the form of flywheels, UPS and battery backup, but these are only sufficient for a relatively short duration of time, ranging from a few seconds to a few minutes at the most. Moreover, these systems only provide power to the critical IT equipment and not to the secondary systems, including cooling. Backup or standby generators are a must if a data center is to ensure long-term reliability and provide sufficient backup power which could last a few hours or even a couple of days if circumstances so require.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Selecting a backup generator of the proper size and power rating is of utmost importance to ensure that the generator is able to cope with the demand when it is actually called upon.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The calculation of the total power required for a data center is basically a simple procedure and involves adding up the power ratings of all the equipment which consumes electrical energy. This includes IT and cooling equipment. Of course, all the loads may not be operating simultaneously at all times, but it is always advisable to size the generator for peak load, since that represents the worst-case scenario and covers the maximum load that could occur at any given time during a grid power failure.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">It must also be remembered that though the load requirements for IT-related equipment can be found from a simple addition of the power ratings of the different equipment, the same is not true of machinery such as electric motors. An electric motor draws a much higher current during the initial starting phase and finally settles down to its normal rated value after it has attained sufficient speed. 
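</span><br /><span style="font-family:verdana;">As a rough illustration of the arithmetic involved, here is a minimal sketch in Python. The figures are entirely hypothetical, and a real design would of course be done by a qualified electrical engineer against manufacturer data rather than with a script like this.</span><br /><pre>
# Rough generator-sizing sketch (hypothetical figures only).
it_load_kw       = 300            # nameplate sum of IT equipment
cooling_motors   = [30, 30, 15]   # rated kW of pumps, fans and compressor motors
start_multiplier = 3              # motors may draw several times rated power while starting
growth_margin    = 1.25           # headroom for future expansion

running_load = it_load_kw + sum(cooling_motors)
# Worst case assumed here: every motor starts at once while the IT load is already on line.
starting_load = it_load_kw + sum(m * start_multiplier for m in cooling_motors)

required_kw = max(running_load, starting_load) * growth_margin
print(f"Running load: {running_load} kW, worst-case starting load: {starting_load} kW")
print(f"Suggested minimum generator rating: {required_kw:.0f} kW")
</pre><br /><span style="font-family:verdana;">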
Hence the total number of motors and their power ratings play an important part in determining generator size.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Provision must be made for a situation in which all motors start simultaneously and therefore draw several times their combined rating during the starting period; this is the load which the backup generator should be able to handle without strain.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Moreover, data centers normally grow in capacity over time along with the company. This in turn means a rise in the power and cooling requirements of the data center. While there may be no magic formula for calculating the power requirements at a given time in the future, a rough estimate of future expansion should be available based on company plans and industry trends. The generator should be able to cope with this rise in demand, and hence its rating should be somewhat higher than the maximum peak load calculated previously.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Another factor to keep in mind is that generators consume fuel, and the bigger the size, the more fuel is consumed. Hence an optimum balance also needs to be struck between generator size and fuel efficiency. Take a hypothetical example in which the peak power requirement is estimated at 50 kW but the normal load is only around 15 kW. This means that the generator would be running at a much lower load than its rated power, which has two disadvantages.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Firstly, since most generators are diesel-engine driven, their efficiency is quite low at light loads; secondly, a large amount of fuel goes to waste for the relatively small amount of power that is actually required. For these reasons, in practice a single generator is often not feasible for all the power requirements of a large modern data center; companies instead use an array of generators which together provide the necessary power. For example, Google has installed more than three dozen generators in its Iowa data center.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Apart from choosing generators of the right size and capacity, it is also important that the generators are kept well maintained and serviced at the appropriate intervals. This interval is specified by the manufacturer either as calendar time or as running hours, and the schedule should be strictly adhered to. Routine operation of standby generators is necessary to ensure that they start without problems during an actual emergency, and while running they should be monitored for any indication of a possible fault.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Testing of the backup generators also has different levels: some data centers might be happy with starting them once a week, running them for some time and shutting them down, while other companies might recommend drills which help to ensure and monitor the real availability of these generators during times of need. 
The grid power in such a case is deliberately cut off so that the reaction of the generators and automatic transfer switches (if present) can be observed. In practice, however, many data center managers are reluctant to do this, as they shudder at the thought of a possible loss of availability of their data center.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Hence we have seen that a backup generator of the right size and rating is a necessary piece of machinery for every data center. Its installation helps to ensure that the various services provided to clients continue uninterrupted despite any failure in grid power, for whatever reason.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-28360349333842801852009-03-13T07:19:00.000+05:002009-03-13T07:19:00.953+05:00Virtual Infrastructure Management<span style="font-family:verdana;font-size:85%;color:#c0c0c0;">Written by Stephen Elliot</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">With its ability to reduce costs and optimize the IT infrastructure by abstracting resources, virtualization has become an increasingly popular tactic for enterprises having to compete in an ever more challenging global economy. According to a recent study CA conducted with 300 CIOs and top IT executives, 64 percent of respondents say they've already invested in virtualization, and the other 36 percent reported that they plan to invest in virtualization.</span><br /><span style="font-family:verdana;"><br />You don't have to look very hard to find the biggest reasons for virtualization's widespread adoption: cost savings and IT agility improvements. Because of the current global economic crisis, CIOs are being asked not only to do more with less, but also to do it with lower headcount while delivering higher IT service levels. By wringing out more performance without adding huge IT infrastructure line items to the budget, virtualization provides the more-bang-for-less-buck solution that organizations are looking for. Increasingly, enterprise IT organizations want to host more critical workloads on virtual machines; however, the management risks must be reduced.</span><br /><br /><span style="font-family:verdana;">Respondents to CA's study stated they are also implementing virtualization for technical reasons like easier provisioning and software deployment. But although virtualization brings a tremendous opportunity for IT organizations to compress the processes and cycle times between production and application development teams, drive out more agility in the infrastructure and automate more processes, it also brings a great deal of complexity, both in the expertise needed to run the software and in the management processes that must be tweaked and adjusted.</span><br /><span style="font-family:verdana;"><br /><strong>The Challenges Facing Virtual Infrastructure Management</strong><br /><br />One major technical issue facing organizations looking to add virtualization to their IT infrastructure is the limitation of system platform tools. 
As the virtual machine count begins to creep up, platform tools can't provide the amount of granular performance data necessary to give the IT staff a complete picture of what's going on.</span><br /></span><br /><span style="font-family:verdana;">Couple that with the heterogeneous mix of virtualization platforms companies are using and management challenges begin to have an impact on IT's ability to accelerate the deployment of virtual machines. The bottom line is that both platform management and enterprise management solutions are required to deliver an integrated business service view of both physical and virtual environments.</span><br /><br /><span style="font-family:verdana;">Similarly, organizations need the ability to integrate the physical infrastructure with its virtual counterpart in order to automate configuration changes, patch management, server provisioning and resource allocation. The key business service outcomes of this are lower operations costs, improved ROI from virtualization deployments, and an end-to-end view of an IT service.<br />CIOs are also looking at virtualization for more than just cost savings. They're looking for a management solution that will transform their IT organizations and demonstrate success via measurable metrics and key performance indicators, whether they're business processes such as inventory churn and increasing margins, or technical metrics like server-to-admin ratio (or virtual-machine-to-admin ratios), or even a reduction in the number of trouble tickets sent to the service desk. The goal is to deliver business transformation in an ongoing, measurable manner to mitigate the business risk of a growing virtualization deployment.</span><br /><br /><span style="font-family:verdana;"><strong>The Solution: Virtualization as Strategy, Not Just Tactic</strong></span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">All of these challenges point to a common solution that transforms the deployment of virtualization from being an ad hoc cost-savings tactic to a more strategic enterprise platform. Rather than merely increasing the number of virtual machines, IT can take the opportunity to think about how it can get the most out of decompressing the processes between teams, increasing the workflow automation, reducing handoff times, reducing configuration check times and increasing compliance. These are the foundational steps that lead to IT transformation and successful business service outcomes. Without these capabilities, the failure rate of projects and associated costs substantially increase.</span><br /><span style="font-family:verdana;"><br /><strong>Where Virtual Infrastructure Management Is Headed</strong><br /><br />Another reason that viewing virtualization as an enterprise platform is becoming crucial to organizations is that virtual machines are taking on different forms as virtual technology transforms. The management of desktop virtualization is becoming increasingly important as the technology increases in popularity. One particular challenge is the number of different architectures that needs to be taken into consideration for any desktop virtualization solution.</span><br /></span><br /><span style="font-family:verdana;">Likewise, a growing number of organizations are investigating network virtualization. 
In particular, Cisco's new virtual switch technology, which includes embedded software from VMware, has been making ripples across the IT world.</span><br /><br /><span style="font-family:verdana;">Having an enterprise platform in place makes such new developments in virtualization easier to implement and manage. The better an organization plans for the management, processes and chargeback opportunities virtualization offers, the more IT can lead the business outcome discussion and drive out measurable success.</span><br /><br /><span style="font-family:verdana;">While virtualization has already helped transform data centers, drive consolidation efforts and reduce power and cooling costs, we've just scratched the surface. There's a lot more to go.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-51998547912763240432009-03-12T07:01:00.002+05:002009-03-12T07:01:01.045+05:00How To Become Productive At Work (Part I)<a href="http://blog.rozee.pk/wp-content/uploads/2009/02/perfect1.jpg"><span style="font-family:verdana;"><img style="MARGIN: 0px 0px 10px 10px; WIDTH: 288px; FLOAT: right; HEIGHT: 297px; CURSOR: hand" border="0" alt="" src="http://blog.rozee.pk/wp-content/uploads/2009/02/perfect1.jpg" /></span></a><span style="font-family:verdana;">Each day starts with best of intentions. There are deadlines to meet, essential work to be finished, important business meetings & phone calls and short and long-term projects to be started. As the day comes to a closure and we are wrapping up to leave, we discover that barely a fraction of what we had on our to-do list has been accomplished. As a result we make a mental note to come in early the next day, stay late, and work at weekends as well. Yes, we are busy, but are we productive?</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">A professional is hired for one reason, i.e., he demonstrates the potential to be productive at work. So now you are there in a cubicle, facing a computer with the expectation that you will do something good for the company. Do you feel like being stuck at work sometimes? Would you like to be more productive and feel a greater sense of accomplishment at the end of each day? Well you can. It just takes a desire and commitment to renew your habits and routines.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">A productive environment leads to productive employees. The article below is divided into two parts. This week we will give an insight on why a productive environment is necessary to motivate and make employees industrious at work while next week we will focus on how can employees themselves inculcate productivity in their profession.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;color:#cc0000;">Why is a productive environment necessary?</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Employees produce good results when their managers treat them well and the organization pays special attention to their professional needs. So the question arises: What do most talented, productive employees need from a workplace?<br />Good managers recognize employees as individuals and do not treat everyone at a collective level. They don’t try to “fix” people and their weaknesses; instead, they excel at turning talent into performance. 
The key to productivity is to make fewer promises to your employees and then strive to keep all of them.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">What does a great workplace look like? Gallup took the challenge and eventually formulated the following questions:</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;color:#cc0000;">The Twelve Questions to Measure the Strength of a Workplace:</span><br /><ol><li><span style="font-family:verdana;color:#009900;">Do I know what is expected of me at my job?</span></li><li><span style="font-family:verdana;color:#009900;">Do I have the materials and equipment I need to do my work right?</span></li><li><span style="font-family:verdana;color:#009900;">Do I have the opportunity to do what I do best everyday?</span></li><li><span style="font-family:verdana;color:#009900;">In the past seven days, have I received recognition or praise for doing well?</span></li><li><span style="font-family:verdana;color:#009900;">Does anybody at my job place seem to care about me as a person?</span></li><li><span style="font-family:verdana;color:#009900;">Is there anyone, may it be a supervisor or a colleague, who encourages my development?</span></li><li><span style="font-family:verdana;color:#009900;">Do my opinions seem to count at my workplace?</span></li><li><span style="font-family:verdana;color:#009900;">Does the mission/purpose of my company make me feel that my job is important?</span></li><li><span style="font-family:verdana;color:#009900;">Are my co-workers committed to accomplishing excellence while performing their job responsibilities?</span></li><li><span style="font-family:verdana;color:#009900;">Do I have a best friend at the organization I’m an employee of?</span></li><li><span style="font-family:verdana;color:#009900;">Has someone at work talked to me about my progress in the last six months?</span></li><li><span style="font-family:verdana;color:#009900;">This last year, has my job given me an opportunity to learn and grow?</span></li></ol><span style="font-family:Verdana;"></span><br /><span style="font-family:verdana;">The results yielded that the employees who responded positively to the 12 questions worked in business units with higher levels of productivity, profit, employee retention and customer satisfaction. It was also discovered that it is the employees’ immediate manager, and not the pay, benefits, perks or charismatic corporate leader, who plays the critical role in building a strong workplace. So it implies that people leave managers, not companies. This means that if your relationship with your immediate manager is fractured, no amount of company-sponsored daycare will persuade you to stay and perform.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;color:#cc0000;">Relationship between managers, employees & companies:</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">According to the Gallup survey:</span><br /><ul><li><span style="font-family:verdana;">A bad manager can scare away talented employees, hence, draining the company of its power and value. 
The top executives are often unaware of what is happening down at the frontlines.</span></li><li><span style="font-family:verdana;">An individual achiever may not necessarily be a good manager; companies should take care not to over-promote.</span></li><li><span style="font-family:verdana;">Organizations should hold managers accountable for employees’ response to these 12 questions.</span></li><li><span style="font-family:verdana;">They should also let each manager know what actions to take in order to deserve positive responses from his employees because an employee’s perception of the physical environment is colored by his relationship with his manager.</span></li></ul><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;color:#cc0000;">Bring out the best:</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The Great Manager Mantra is: People don’t like to change that much. Don’t waste time trying to put in what is left out. Try to draw out what is left in.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Managers are catalysts:</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">As a catalyst, the manager speeds up the reaction between the employee’s talents and the achievement of company’s goals and objectives. In order to warrant positive responses from his employees, a manager must:</span><br /><ol><li><span style="font-family:verdana;">Select a person</span></li><li><span style="font-family:verdana;">Set expectations</span></li><li><span style="font-family:verdana;">Motivate the person</span></li><li><span style="font-family:verdana;">Develop the person</span></li></ol><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;color:#cc0000;">Why does every role, performed at excellence, require talent?</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Great managers define talent as “a recurring pattern of thought, feeling, or behavior that can be productively applied”, or the behavior one finds oneself doing often. The key to excellent performance is: matching the right talent with the required role to be played.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">“Excellence is impossible to achieve without natural talent.”</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Every individual is unique and everyone has his/her own personality accompanied by a dignity and self respect to go with it. Without talent, no amount of new skills or knowledge can help an employee in unanticipated situations. In the words of great managers, every role performed at excellence deserves respect; every role has its own nobility.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;color:#cc0000;">Comfortable environment:</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">In today’s competitive corporate world, it is becoming increasingly important to focus on the appearance of the workplace. With a mounting number of people spending more time in their offices, the physical comfort, visual appeal and accessibility of their workplace has gained ever more importance. 
Wouldn’t it make far better sense to retain valuable employees by making small, yet meaningful, aesthetic adjustments to their work environments?</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Studies have shown that employers, who care about their employees and their work environment, have fashioned more motivated and productive people. There is a strong relationship between motivation and productivity at the workplace. Employees who are inspired will be more diligent, responsible and eventually, more industrious.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;color:#cc0000;">Well lit, airy & clean:</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Employees spend 6 to 8 hours at their workplace every day which makes a workplace their second home. It is up to the employers to see and make sure that the office is fully facilitated and is in good working order. It must be well lit and well ventilated with the right amount of lights, fans, air-conditioning. Cleanliness is of utmost importance as there are a huge number of workers working at a job place. The offices, cubicles, rest area, washrooms, kitchen & serving area must be neat and clean. The more comfortable the working environment is more productive will be the employees.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;color:#cc0000;">Safety measures:</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">An employer must make sure that he provides a safe environment to his/her employee. The security measures outside office include security guards and parking facility. While inside the office, there must be introduced a safe environment for male and female employees to work so that if an employee has to work late hours she/he should feel safe and comfortable working in his/her office. There must be no discrimination or harassment practiced and the employee should be given equal opportunity to grow as an individual despite being male or female.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;color:#cc0000;">The power of recognition:</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Acknowledgment is a powerful motivator. If you praise your employees and acknowledge their efforts they will feel better about themselves and about the hard work they have put in.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;color:#cc0000;">The saga of raise:</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Sometime back it was believed that a “salary increase” is the most obvious tool for encouraging employees to work hard. Today several studies have discredited the idea. Employees do not become more productive simply because they are paid more. After all, employees do not calculate the monetary value of every action they perform. Studies show that while a raise makes employees happy, there is an abundance of other things that can accomplish the same thing.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;color:#cc0000;">The power of praise:</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">A pat on the shoulder can produce wonders. 
For effective management, a manager must recognize that fairness and leadership alone cannot inspire his staff to work hard. Deep inside all of us, we crave for being appreciated. Praise is an affirmation that an employee did something right, and every time he receives compliments in the workplace he pushes harder to receive the same avowal the next time around.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;color:#cc0000;">The importance of incentives:</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Incentives even with no monetary value are just as important as praise. Incentive can be categorized as, praise with a physical form. It is actually a reward for a job well done. Managers tend to ignore the importance of non-monetary incentives while these have been found to dramatically increase employee’s sense of worth in relation to actual work accomplished. They could be company logo mugs or shirts or business card holders, no matter what you decide to give to your employees as an incentive, never lose sight of the need to recognize their efforts, whether verbally or through small office gifts.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-2800027459875547102009-03-11T07:50:00.000+05:002009-03-11T07:50:00.477+05:00Desktop Virtualization – Has it hit your desk yet?<span style="font-family:verdana;font-size:85%;color:#c0c0c0;">Written by David Ting</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The discussion on desktop virtualization, or hosted virtual desktop, is heating up. Some view it as futuristic. Others say it is throwback to the world of mainframe computing. With economic concerns forcing businesses to take a hard look at expenses across the enterprise, however, there are many reasons this is such a hot topic.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">In our current cost conscious world, the potential to reduce IT costs are obvious: virtualization significantly reduces the need for idle computing hardware and drastically lowers power consumption - especially in mission critical environments like healthcare where machines need to be on 24 hours a day. Lower power consumption comes from reducing the need to run lightly loaded but high powered CPUs at each desktop and delivering desktop sessions for multiple users from a server that can be heavily loaded. Most importantly, virtualization frees up IT from having to maintain large numbers of desktop systems that are largely user managed. It also eliminates the need to constantly re-image machines that have degraded through common usage. Imagine how many fewer head aches we would have if we could have a new copy of the OS Image everyday - and not have to suffer through the "plaque" build up that slowly kills performance.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">This all sounds good. 
But, before diving headfirst into the virtualization pool, it's important to realize that the benefits of desktop virtualization also lead to a new security challenges - especially around managing user identities, strong authentication and enforcement of access policies.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">With user identities being relevant in multiple points within the virtual desktop , coordinating and enforcing access policies becomes far more difficult and error prone as all the systems have to be in sync. Since one of the advantages of having virtual desktops is the ability to dynamically create desktops specific to the user's role within the organization, having a centralized way to manage user identities, roles and access (or desktop) policies is critical in this new virtualized environment. Allowing users to only access tailored desktops specific to their role or access location can be tremendously valuable in controlling access to computing resources. Being able to leverage a single location for authenticating users, obtaining desktop access rights and auditing session related information is equally important, if not more so, than what we have in a conventional desktop environment.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">While it is still some time out before adoption becomes common - security capabilities and limitations present a barrier to adoption - we're beginning to see customers who need to address these issues - connecting the user identity with authentication and policy link all the way from the client to the virtualized session and even to the virtualized application.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Desktop virtualization has tremendous promise - however, until we can replicate the user's current experience --and more importantly--make it easier to set and enforce authentication and policy in this environment, there's still work to be done.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-17830990272986780842009-03-10T07:43:00.000+05:002009-03-10T07:43:00.830+05:00Internal Regulations for Data Access<span style="font-family:verdana;font-size:85%;color:#c0c0c0;">Written by Tsvetanka Stoyanova</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Data is very important to every company, organization and government. In our age of computers, data has become as precious as gold. Data wields power, but data misuse can create havoc. Data is difficult to protect. It seems like every month a new media report describes the hacking of data and no one is immune.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The protection around data can appear as solid as steel, but over confidence in your data’s protection is a fool’s path. You can never be 100% sure when your information will be accessed illegitimately.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Take a look at those media reports again and notice that it is not just the small and medium sized companies that are being hacked, but everyone from government agencies to members of the Fortune 500.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Why It Is Important to Have Internal Regulations for Data Access? 
It is obvious that internal regulations for data access are important. Data misuse is too common to be neglected, and it is not only hackers who are to blame; the lack of a solid security program is also at fault. Data is valuable, and if you hold data you should take steps to keep it protected. Data should be protected not only from external intruders, but from insiders as well.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Unfortunately, many cases of data theft are inside jobs. Some sources say that 80% of threats come from insiders and that 65% of internal threats remain undiscovered. This is scary, to say the least. While you can't suspect that all your employees are criminals, it is mandatory that you have a program in place to monitor internal breaches. In many cases, perhaps more than half, the employee is simply unaware that the information being gathered is off limits. It is important to communicate company policies on accessing data to those who have access or a means to easily intrude.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">No company wants to make the headlines or become known for internal data theft, insider trading, or leaks of sensitive information. That's why you need to have internal regulations for data access. Most importantly, make sure that they are followed without exception.<br /></span><br /><span style="font-family:verdana;">Internal Regulations for Data Access<br /></span><br /><span style="font-family:verdana;">Protecting data involves many steps and some of them are described in the following Data Protection Basics article. However, since internal regulations are an extensive subject, we'll deal mainly with them here. The rules that define adequate internal regulations for data access are the basis for your data protection efforts.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The main purpose of any internal regulations program for data access is to prevent intentional and unintentional data misuse by your employees. This can be a difficult task. Let us review some steps that you should consider.</span><br /><ul><li><span style="font-family:verdana;">Check all applicable regulations and industry requirements for changes and updates. Keeping an eye out for changes is not enough. You should have a good understanding of what each regulation is asking of you. As an example, in Europe many professionals rely on the EU Data Protection Directive. This is a good start; however, on closer inspection the Directive only provides general guidance, not detailed steps. Detailed steps are provided by individual country regulations.</span></li><li><span style="font-family:verdana;">Make your employees aware of the risks of unauthorized data access. 99% of data center staff are aware that data is gold and won't misuse it unintentionally. The remaining percentage is what you need to be aware of. While in most cases data theft is intentional, there are also cases of leakage in which an employee has been fooled by a third party, and obvious as it may seem, you need to make sure that this never happens. I recall a case in which a software developer, who had just started his first full-time job with a company, was tricked by a "friend" into showing the source code of one of the products the company was developing. 
The thief rebranded the stolen source code and launched it as his own product and began competing with the company he robbed.</span></li><li><span style="font-family:verdana;">The minimum privileges rule. In above example, the theft may not have happened if the developer did not have access rights to the source code. It is important to give access sparingly. An employee should only have access to data he or she needs in order to be able to perform his or her daily duties. A process such as this may slow development, but this is tolerable in comparison to losing the information.</span></li><li><span style="font-family:verdana;">Classify your data so that you are aware of what is sensitive. There are degrees of sensitivity that need to be classified. Financial and health records should be at the highest tier. Data classification could be an enormous task but once completed updating is all that remains.</span></li><li><span style="font-family:verdana;">Define primary and secondary access users. It is good practice to assign primary access and then secondary access in the event something happens to the person who has the first tier access.</span></li><li><span style="font-family:verdana;">Physical access. Ensure that your facility has the proper physical security levels. This includes a secure facility with card access entry points, identification badges and security code access to the building.</span></li><li><span style="font-family:verdana;">Access to machines and applications. Physical access includes access to premises and machines but very often one doesn't have to have physical access in order to get hold of sensitive data. You also need to define rules for access to machines and the applications on them. Also, think about backups and virtual machines – don't forget to cover them as well. In some cases access restrictions are limited to some period of time only (for time-sensitive data, which after the critical period has expired becomes publicly available), while in others they are for the entire life cycle of the data.</span></li><li><span style="font-family:verdana;">Be sure to have a policy in place for ex-employees. Remove access requirements and change codes immediately to avoid theft. Be wary of employees who voice negative statements about the company or those who are disgruntled for any reason.</span></li><li><span style="font-family:verdana;">Keep an eye out. As I have mentioned, some sources say that as much as 65% of internal thefts go unnoticed. Keep an eye out for possible violations and investigate them right away.</span></li><li><span style="font-family:verdana;">Know who you should contact in the event that you find or see a data breach.</span></li><li><span style="font-family:verdana;">Create standard operating procedures (SOPs). The National Institute of Standards and Technology (NIST) have published guidelines for bolstering the response capabilities of enterprises.</span></li><li><span style="font-family:verdana;">If hacked, preserve all evidence and have a process in place to do that includes maintaining availability of equipment.<br /></span></li></ul><span style="font-family:verdana;">The above mentioned measures are not an all inclusive list. Whether the investigation is internal or external, computer-based fraud and electronic data theft are extremely serious security issues. 
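</span><br /><span style="font-family:verdana;">Before closing, one item from the list above, the minimum-privileges rule, can be made concrete. The following is a minimal sketch, assuming Python; the roles and data classifications are purely hypothetical, and a real deployment would sit on top of a directory service rather than a hard-coded table.</span><br /><pre>
# Minimal sketch of the minimum-privileges rule (hypothetical roles and classifications).
# Each role is granted only the data classes needed for its daily duties; everything else is denied.
ACCESS_MATRIX = {
    "developer":      {"source_code"},
    "dba":            {"customer_records"},
    "payroll_clerk":  {"payroll"},
    "security_admin": {"audit_logs"},
}

def can_access(role: str, data_class: str) -> bool:
    """Deny by default; allow only what the role explicitly needs."""
    return data_class in ACCESS_MATRIX.get(role, set())

assert can_access("developer", "source_code")
assert not can_access("developer", "payroll")   # outside the developer's daily duties
</pre><br /><span style="font-family:verdana;">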
Whatever the situation, employ a data breach response plan that preserves evidence, helps catch the criminals, and ensures that the enterprise negates any vulnerabilities.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-78455847101183786172009-03-09T07:39:00.000+05:002009-03-09T07:39:00.417+05:00Fuel Cells and Data Center Backup Power<span style="font-family:verdana;">Everything that moves or works needs energy and the same is true ranging from human beings to inanimate matter in the form of computers and servers. Needless to say the power requirements for a data center necessarily include a continuous supply of power which isn’t interrupted by the elements of weather, climate, power grid or anything under the sun for that matter.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Since the dependence on grid power for 24/7 supply would be bit risky despite the best available infrastructure in any part of the globe, alternative sources of energy have been used to provide back up power in the event of main power failure. These sources include battery back ups, electrical generators etc and have been providing back ups to systems despite certain practical drawbacks of each of these sources. Scientists have been struggling to develop better power back ups and fuel cells offer one such source of back up which would be discussed in this article.<br /></span><br /><span style="font-family:verdana;"><strong>What is a Fuel Cell?</strong><br /></span><br /><span style="font-family:verdana;">Most of us know what fuel means and also what is meant by a cell! But have you ever heard or have you got a clue as to what a fuel cell means? Well, to put it simply in the terms as described by Stanford University , a fuel cell is a “static device that converts the chemical energy in natural gas into electricity and hot water through an electrochemical process”.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">If we go back in time we will discover that although the concept of a fuel cell was demonstrated as long back as 1839 by Sir William Grove, it was not converted into a practically usable device until half a century ago when NASA used fuel cells in her missions.</span><br /><br /><strong><span style="font-family:verdana;">Comparison to a Battery and a Combustion Engine</span></strong><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">People have many questions and doubts regarding the use of fuel cells especially their comparison vis-à-vis batteries and IC engines as source of power. Actually a fuel cell can be said to contain the best of both worlds namely batteries and engines. Batteries produce energy (i.e. electrical energy) from chemical energy via reactions that take place inside the battery without having to carry out combustion of the electrolyte thereby making the process a lot easier and clean. Engines on the other hand produce energy by burning fuel combined with oxygen in the air and producing energy in the process, while releasing a lot of heat and polluting leftovers as well.<br /></span><br /><span style="font-family:verdana;">A fuel cell burns fuel like an engine but does so without the emissions associated with an engine, more like a battery. 
Most commonly used fuel for fuel cells is hydrogen apart from other substances such as natural gas and methanol.<br /></span><br /><span style="font-family:verdana;">Let me elaborate on a very important point at this stage that fuel cells are currently in their development state only and are not mass produced commercially unlike batteries which are available in plenty. Obviously they are bit costly to find and install at the present times but the situation will certainly become better as their potential for various uses including data center back up is realized and they become more commercially viable in the coming future.<br /></span><br /><strong><span style="font-family:verdana;">Types of Fuel Cells<br /></span></strong><br /><span style="font-family:verdana;">Fuel cells come in various types and are mainly classified based upon the electrolyte that they use. It would not be possible to elaborate on different fuel cells in detail, in this relatively short report but a brief description would suffice the purpose at the moment. Based on the electrolyte description, the various types of fuel cells are as follows;</span><br /><br /><span style="font-family:verdana;">Proton Exchange Membrane fuel cells contain a solid polymer as the electrolyte and offer several advantages such as the absence of corrosive fluids from the cell, better durability. They only use hydrogen and oxygen for operation but then the continuous supply and storage of hydrogen could be problematic especially from the safety point of view. However PEM fuel cells are costly initially due to the presence of platinum catalyst which not only adds to the cost but also gets poisoned easily from CO2 emissions. They are less efficient than other types of fuel cells and their output capacity is limited to a couple of hundred kilowatts of power at the most which make them suitable for smaller sized data centers.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Phosphoric acid fuel cells use phosphoric acid as the electrolyte and a platinum catalyst in the electrodes which makes the cell a bit costly. The upside includes less sensitivity to poisoning by external agents unlike the PEM cells described above. Moreover these cells are highly efficient and can be used upto power requirements ranging upto a couple of megawatts thus making them suitable for large installations.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Apart from these two types there are several other types of fuel cells as well which include but not limited to Molten Carbonate, Alkaline, and Direct Methanol fuel cells, each of them having their own unique features, power range and advantages as well as drawbacks.</span><br /><br /><strong><span style="font-family:verdana;">Suitability for Data Center Power Supply Back up</span></strong><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">We have already seen that fuel cells provide a source of clean, noise free and high quality electric power which can be produced continuously provided the fuel supply is maintained. This makes them more suitable for power back up applications in data center situations where the use of batteries would be limited in their time for back ups, while engines would produce a lot of noise, heat and pollution which would have to be taken care of as well. 
Data centers consist of sensitive and important machines and equipment, and hence fuel cells provide an ideal means of backup power supply in case of main power failure.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">The future of fuel cells certainly seems bright, not only in applications involving power supply for data centers but in general as well. Given the increasing environmental concerns related to producing energy via conventional sources, and the advantages that fuel cells have to offer, they are a very promising source of non-conventional energy. Data centers, on the other hand, are becoming more power hungry by the day, and together these two factors should go a long way toward promoting fuel cell technology.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-91965823000592901642009-03-07T07:30:00.001+05:002009-03-07T07:30:00.563+05:00Key Formulas for Data Center Meaningful Reports<span style="font-family:verdana;font-size:85%;color:#c0c0c0;">Written by Rakesh Dogra</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Today’s data center manager is being asked to do more with less. Included on that list is the preparation of reports on the efficiency, shortcomings and strengths of the data center. Such reports are only useful if they are built on meaningful metrics, benchmarks and formulas.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Data centers come in all shapes and sizes; therefore it is difficult to provide a specific template which works equally well for all. Key metrics, benchmarks and formulas are important to start with. The focus of our discussion will be on formulas. These could be strictly mathematical formulas in the exact sense of the term, or simply tips for creating meaningful reports in the broad sense.</span><br /><br /><span style="font-family:verdana;"><strong>Formula 1:</strong> Since data centers are huge consumers of energy, it is important that this energy be utilized efficiently. Several metrics, with accompanying formulas, are available for examining energy efficiency.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Groups such as the Uptime Institute have been preaching the importance of corporate average data center efficiency, or CADE. CADE is a set of four metrics which together rate the business performance of a single data center or the weighted average performance of a group of data centers.</span><br /><br /><span style="font-family:verdana;">CADE separately identifies the IT and facility efficiency of a data center, examining both energy efficiency and capital utilization. 
The four components of CADE with their formulas are:</span><br /><span style="font-family:Verdana;"></span><br /><span style="font-family:verdana;">• Facility Asset Utilization</span><br /><span style="font-family:verdana;">- How much of a facility's power and cooling capacity is being used?</span><br /><span style="font-family:verdana;">- Basic measurement concept: [Current IT load] / [Maximum IT load capacity]</span><br /><span style="font-family:verdana;">• Facility Energy Efficiency</span><br /><span style="font-family:verdana;">- How much of a facility's total incoming energy ends up being consumed by IT equipment?</span><br /><span style="font-family:verdana;">- Basic measurement concept: [Current IT load] / [Current total facility energy], equivalent to the Green Grid's DCiE or 1/PUE under certain conditions</span><br /><span style="font-family:verdana;">• IT Asset Utilization</span><br /><span style="font-family:verdana;">- How much IT compute asset capacity is being utilized?</span><br /><span style="font-family:verdana;">- Basic measurement concept: [Average volume server CPU utilization]</span><br /><span style="font-family:verdana;">• IT Energy Efficiency</span><br /><span style="font-family:verdana;">- How effectively does the data center's IT equipment transform energy into "useful IT work?"</span><br /><span style="font-family:verdana;">- Basic measurement concept: [useful IT work] / [IT watts]. Since industry-wide definitions of useful IT work are still under development, CADE uses an arbitrary baseline value of 5% for IT Energy Efficiency.</span><br /><br /><span style="font-family:verdana;">More recently the Green Grid has developed two new metrics for energy efficiency. DCiE stands for Data Center infrastructure Efficiency; it measures the efficiency of a data center as the ratio of the energy consumed by the IT equipment to the overall power consumption of the facility, which also includes cooling equipment and other overheads.</span><br /><br /><span style="font-family:verdana;"><strong>Formula 2:</strong> Another Green Grid creation, closely related to DCiE, is PUE, or Power Usage Effectiveness. Mathematically it is the inverse of DCiE: the total data center power consumption divided by the power consumed by the IT infrastructure. A typical enterprise data center today comes out at around 2 or higher.</span>
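To make Formulas 1 and 2 concrete, here is a minimal worked sketch in Python (illustrative only: the load figures are invented, and the CADE line simply multiplies the four components described above rather than reproducing any vendor's exact tooling):

# Illustrative only: the sample figures below are invented, not from a real facility.

def dcie(it_load_kw, total_facility_kw):
    """Data Center infrastructure Efficiency = IT load / total facility load."""
    return it_load_kw / total_facility_kw

def pue(it_load_kw, total_facility_kw):
    """Power Usage Effectiveness = total facility load / IT load (inverse of DCiE)."""
    return total_facility_kw / it_load_kw

def cade(facility_util, facility_energy_eff, it_util, it_energy_eff=0.05):
    """Composite of the four CADE components; IT Energy Efficiency defaults
    to the 5% baseline mentioned above."""
    return facility_util * facility_energy_eff * it_util * it_energy_eff

it_load = 500.0          # kW drawn by servers, storage and network gear
total_load = 1000.0      # kW drawn by the whole facility
max_it_capacity = 800.0  # kW of IT load the facility was designed to support
avg_cpu_util = 0.30      # average volume-server CPU utilization

print("DCiE: %.2f" % dcie(it_load, total_load))    # 0.50
print("PUE:  %.2f" % pue(it_load, total_load))     # 2.00
print("CADE: %.4f" % cade(it_load / max_it_capacity,
                          dcie(it_load, total_load),
                          avg_cpu_util))           # about 0.0047, i.e. 0.47%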
<span style="font-family:verdana;"><strong>Formula 3:</strong> The power consumption formula must be understood by the data center manager if the first two formulas are to be applied. Basically, power consumption is measured in kWh, or kilowatt-hours.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">If a device has a rated power of W watts and is used for n hours, then the number of kWh consumed is given by</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">W * n / 1000 kWh<br /></span><br /><span style="font-family:verdana;"><strong>Formula 4:</strong> Another useful formula estimates how much of a device's electrical input ends up as heat:<br /></span><br /><span style="font-family:verdana;">Heat generated = Wattage * 3.412, which gives the heat in BTU/hr, or British Thermal Units per hour<br /></span><br /><span style="font-family:verdana;">Although not every watt of input ends up as heat, this formula gives a broad idea of the heat generated when cooling requirements have to be calculated. In air conditioning terminology, 1 ton of air conditioning is equivalent to 12,000 BTU per hour. The exact requirement should be estimated by an HVAC engineer, but this formula is good enough to give the data center manager a rough figure.<br /></span><br /><span style="font-family:verdana;"><strong>Formula 5:</strong> As a general rule of thumb rather than an exact formula, the average lighting power requirement is roughly two watts for every square foot of data center space. This rule can be used to calculate the lighting load of the floor area that needs to be lit.<br /></span><br /><span style="font-family:verdana;">Another rule of thumb assumes that nearly half of a data center's power requirement is for cooling purposes, while roughly 36% is for critical loads; the remainder is shared by lighting (3%) and battery charging and UPS consumption (11%). 
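Formulas 3 to 5 can be sketched just as simply; the snippet below is illustrative only, and the wattages and floor area are invented for the example:

# Illustrative figures only; substitute your own measured values.

def kwh(rated_watts, hours):
    """Formula 3: energy consumed in kilowatt-hours."""
    return rated_watts * hours / 1000.0

def heat_btu_per_hr(watts):
    """Formula 4: rough heat output in BTU/hr (1 W is roughly 3.412 BTU/hr)."""
    return watts * 3.412

def cooling_tons(btu_per_hr):
    """1 ton of air conditioning is roughly 12,000 BTU/hr."""
    return btu_per_hr / 12000.0

def lighting_load_watts(floor_area_sqft, watts_per_sqft=2.0):
    """Formula 5: rule-of-thumb lighting load."""
    return floor_area_sqft * watts_per_sqft

it_watts = 40000  # 40 kW of IT equipment
print("Monthly energy: %.0f kWh" % kwh(it_watts, 24 * 30))
print("Heat output:    %.0f BTU/hr" % heat_btu_per_hr(it_watts))
print("Cooling needed: %.1f tons" % cooling_tons(heat_btu_per_hr(it_watts)))
print("Lighting load:  %.0f W for 5,000 sq ft" % lighting_load_watts(5000))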
Of course it must be kept in mind that these are just average figures and could vary substantially per situation.<br /></span><br /><span style="font-family:verdana;">Finally, here are several others, in no particular order.</span><br /><ol><li><span style="font-family:verdana;">Asset Efficiency (AE) = (IT Energy Efficiency) x (IT utilization)</span></li><li><span style="font-family:verdana;">Data Center Density (DCD) = (Total CPU Cycles) / (Total Data Center Square Footage)</span></li><li><span style="font-family:verdana;">Data Center Productivity (DCP) = (Useful computing work) / (Total Facility Power)</span></li><li><span style="font-family:verdana;">Deployed Hardware Utilization Efficiency (DH-UE) = (Minimum Number of Servers Required for Peak Load) / (Total Number of Servers Deployed)</span></li><li><span style="font-family:verdana;">Deployed Hardware Utilization Ratio (DH-UR) = (Number of Servers Running Live Applications) / (Total Number of Servers Actually Deployed)</span></li><li><span style="font-family:verdana;">Facility Efficiency (FE) = (Facility Energy Efficiency) x (Facility Utilization)</span></li><li><span style="font-family:verdana;">Storage Density = (Storage Utilization) / (Total Data Center Square Footage)</span></li><li><span style="font-family:verdana;">Storage Utilization = (Server, Network and Backup Storage in Use) / (Total Storage Available)</span></li><li><span style="font-family:verdana;">Storage Automation = (Human Operators) / (Storage Density)<br /></span></li></ol><span style="font-family:verdana;">Hence we see that several formulas are available to help measure your efficiency and your data center productivity. There are others that cover areas including networking, real estate, IT production and more. We will cover them in future articles.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-67976850771329277052009-03-06T06:57:00.001+05:002009-03-06T06:57:00.590+05:00Cyber warfare – how secure are your communications?<span style="font-family:verdana;font-size:85%;color:#c0c0c0;">Written by Mike Simms</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Almost every week the media reports on negligent loss of data, much of it highly sensitive. Perhaps with so many people using so much data in so many different places we should not be so surprised.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Today more and more organizations – emergency services, government departments and financial institutions – hold information nationally and access it nationally, and, in some cases, offshore it.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">There is relatively little offshoring of information by government. But corporate organizations, credit helpdesks and so on hold their customer relations management overseas.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">They share information over the web with a vast number of IT systems and databases. It is almost impossible for anyone to know on what scale this information is accessible.</span><br /><br /><span style="font-family:verdana;">The aggregation of information, in itself, escalates the level of sensitivity. 
So there is greater risk of abuse or corruption, either intended or accidental, as in the loss of the child benefit database last year.</span><br /><br /><span style="font-family:verdana;">Unfortunately, shared technology increases risk, and criminals and vandals are using this same technology to remotely attack data systems. These attacks can be very successful, and by their nature make the deterrent of legal action more difficult.</span><br /><br /><span style="font-family:verdana;">We are faced with different threat levels to network-based information systems. These range from the careless user who leaves a disc on a train to foreign intelligence services who engage in cyber warfare against perceived enemies.</span><br /><br /><span style="font-family:verdana;">An example of the latter centres on the Russian incursion into Georgia in response, they said, to Georgia’s attack on the breakaway republic of South Ossetia. In the weeks leading up to this, Russia had disabled the Georgian president’s website with a massive spam attack – what is known in the trade as a ‘denial of service attack.’</span><br /><br /><span style="font-family:verdana;">So in the quest to satisfy the network-enabled world’s increasing demand for effective data protection, the first step is an accurate assessment of risk.</span><br /><br /><span style="font-family:verdana;">At the lowest level, but the most common source of threat, are the millions of users themselves. They might lose a data stick, leave a laptop on public transport, or write their password on a Post-it note and stick it on their computer screen! </span><br /><br /><span style="font-family:verdana;">Next up are the service providers. With outsourcing on the rise you need to be confident your service providers conduct rigorous processes in how they look after their networks and information.</span><br /><br /><span style="font-family:verdana;">Higher still are the amateur hackers, of which there are many, although they are opportunistic and immediately they hit a firewall will probably move on.</span><br /><br /><span style="font-family:verdana;">At the pinnacle of threat are sophisticated hackers who are often linked to criminal gangs, and foreign intelligence services. These may be relatively few in number – but they have a lot of resources behind them, and therefore need correspondingly greater efforts to fight them.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Assessing the appropriate level of response for each of these threats is therefore the starting point to resolving the problem. There is no point in overkill, locking down systems so tightly that it imposes on the system’s usability if the information it contains is fairly innocuous.</span><br /><br /><span style="font-family:verdana;">When it comes to protecting our data many of us, it seems, are still stuck in the Dark Ages. People think IT protection is just about the computer. It is not the computer but the system it is running on that is most vulnerable. 
We now need to concentrate on how to secure information as it is being transported across networks.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Putting all the necessary protection into computers would be expensive, so making sure that computers can operate on secure and trusted networks is important because of the way we work today, using laptops, working away from the office, all done over public networks.</span><br /><br /><span style="font-family:verdana;">In Britain, sophisticated information assurance services are being developed which span cryptography, computer network defence, intruder detection and business continuity.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Computer network defence is the front line of cyber warfare. For some clients such as government, banks and financial institutions this means real time 24/7 activities manned by people in special trusted locations, and constant updating of threats.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">It is vital to know what level of protection you need. But however good your information assurance is, if someone else has not taken adequate steps they are the weak link and your data is vulnerable because of them. In this network-enabled world we all depend on each other as never before.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-16091024101331960902009-03-05T07:50:00.001+05:002009-03-05T07:50:00.165+05:00Cutting your data center education budget is a mistake<span style="font-family:verdana;font-size:85%;color:#c0c0c0;">Written by Staff Writer</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">During this tight economy one of the first items to get cut from IT budgets has been travel and continuing education. For data center professionals who rely on continuing education to keep them informed on the rapid changes in IT and the Data Center this poses a problem.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">According to various studies, including the U.S. Department of Labor, adults complete continuing education for a variety of reasons. The most prominent reason is for personal accomplishment, with learning things they are interested in as a second reason.</span><br /><br /><span style="font-family:verdana;">Continuing education during a recession is a tough sell for employers. Why, because it is an easy budget cut that will help the organization weather the storm. The problem is that most employees view continuing education as a way for them to better themselves to move up the ladder in their work place. 
When continuing education is removed or reduced from the budget it eliminates or reduces opportunities for employees to stay abreast of innovations in their profession and reduces morale.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Continuing education programs grease the wheels of career transition, permitting candidates to move into "demand occupations.” The problem with this statement during a recession is that there are fewer career transitions other than layoffs.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Employment placement professionals recommend that if you are on the short list for job reductions in your office then you should suggest continuing education as an alternative to being laid off. A continuing education program that helps your employer with skills he or she needs will help your employer ride out the downturn and invest in skills for the upturn.</span><br /><br /><span style="font-family:verdana;">If you believe that continuing education will help you and your company then you should fight for it. Ask yourself how the additional formal education will enhance your performance at work. Even more important, you need to think about how the educational experience will be more valuable in the end than the work time you’ll sacrifice while going to training. If you can verbalize that and provide specific examples of how the training or certification will enhance your contributions to the company, you’ll make a convincing case. As a side note, be sure to properly explain and coordinate how your absence will be covered during your training time.<br />Recently we interviewed Jill Eckhaus, CEO of AFCOM, a leading association supporting the educational and professional development needs of data center professionals around the globe. Jill knows first hand from the AFCOM membership how important continuing education is to them.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Q: Industry reports are noting continuing education will be impacted negatively this year due to the global economy. What is AFCOM hearing from its membership? Are end users cutting back on continuing education? How will it impact your Data Center World events this year?</span><br /><br /><span style="font-family:verdana;">A: The global recession is affecting budgets in every sector, and data centers are certainly not exempt. According to a survey given to AFCOM members in November of 2008, 49% of respondents have been asked to decrease their budgets in 2009 – the average budget decrease being 15% over their 2008 budgets. Continuing education might seem like a logical cut during times of financial stress, but it’s imperative that data center professionals continue to learn and expand their skills. In order to assist its members, AFCOM will continue to open new local chapters. AFCOM has 38 chapters throughout the world today and continues to grow this program. 
Members are able to expand their knowledge and build relationships with peers at these local chapters, which costs little to no money.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Data Center World (</span><a href="http://www.datacenterworld.com/"><span style="font-family:verdana;">www.datacenterworld.com</span></a><span style="font-family:verdana;">) provides five days of intense education and is the largest tradeshow in the industry that displays products and services for the entire data center – from operations to facilities management. In recognition of across-the-board belt tightening, the Data Center World Expo will offer several discussions on saving money in the data center; the show’s closing session is an open forum on how to continue to thrive and be efficient data center managers while spending less. For members who are unable to attend, AFCOM will be offering USB drives containing materials from the educational sessions.<br /></span><br /><span style="font-family:verdana;">Q: How do you convince organizations and your membership that a skilled workforce will always result in increased economic productivity?</span><br /><br /><span style="font-family:verdana;">A: Luckily, most of our members are aware of the fact that an educated workforce can only be beneficial. The data center changes more rapidly than any other department in an organization, and must continuously adapt to new technologies and responsibilities. Data center professionals who are equipped with the most current knowledge and skills are inherently more efficient, which translates into short- and long-term financial savings. Associations can help to justify participation in industry events by illustrating how attendance can help save money in the long run, and by showing that peer interaction and understanding trends is vital to organizations. For instance, AFCOM members who wish to attend Data Center World - but can’t get employer approval this year - are coming to us for help, so we’ve put a link on our website which breaks down the cost-saving techniques attendees will learn and can implement in their own data centers. AFCOM support staff is available for additional help should that information not be enough.</span><br /><br /><span style="font-family:verdana;">Q: Do you believe that continuing education can be seen as a way to retain the better, more educated employees?</span><br /><br /><span style="font-family:verdana;">A: Educated employees are always ahead of the game. Because this industry changes so rapidly, it’s vital that data center professionals stay abreast of trends and developments. Investing in education for current staff means less employee turnover, which leads to big bottom-line savings for organizations.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Q: If an AFCOM member asks you to provide them a list of reasons why they should attend a live data center continuing education event then what would you tell them?</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">A: There is truly no better way to learn about industry trends than from experts in the field, and no better place to discover new products and services that will help with projects than conferences and events wholly dedicated your field. Data center events are unparalleled places to network with peers. 
Attending a live event allows data center professionals to be interactive, giving them the chance to see and feel a product and know exactly what it does – nothing can ever replace that. The same with human interaction – meeting your peers face-to-face and hearing from experts in your field in live forums will keep professionals more engaged, at the top of their game.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Q: Will distance or online continuing education opportunities impact live events and is AFCOM considering this route of training?</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">A: We firmly believe that although online training is a great supplement, it can’t take the place of face-to-face interaction at a live event. At this point, AFCOM does not have any online educational events planned, but is considering them for the future.</span><br /><br /><span style="font-family:verdana;">Q: Why is cutting your data center education budget a mistake?</span><br /><br /><span style="font-family:verdana;">A: The data center is the lifeline of every major organization. What companies must understand is that putting the data center in the hands of a manager who doesn’t know what to do will cause the company to suffer. For instance, if the data center goes down, the company goes down and stands to lose millions. In order to thrive in the face of rapid change – in new technology, breakthrough standards, and updated government regulations – data center professionals must be able to learn from others and have the best education possible.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-50897723169005331882009-03-04T07:36:00.003+05:002009-03-04T07:36:00.597+05:004 Tips For Clean Link Checking<span style="font-family:verdana;">It’s time for another SEO Basics post and this time getting more out of link building efforts is the topic.</span><br /><br /><span style="font-family:verdana;">Link building continues to be an important part of marketing and optimizing web sites and web marketers can often get distracted by quantity goals rather than quality. Link building efforts for search engine optimization purposes rely on clean links that can be crawled by search engine bots. But what’s a “clean crawlable link”? It’s one that is not blocked with Robots NoIndex meta tag, JavaScript redirect, blocked with robots.txt or a NoFollow tag.</span><br /><br /><span style="font-family:verdana;">There are </span><a onclick="javascript:pageTracker._trackPageview('/outgoing/www.seobook.com/archives/001792.shtml');" href="http://www.seobook.com/archives/001792.shtml" target="_blank"><span style="font-family:verdana;">hundreds</span></a><span style="font-family:verdana;"> of </span><a onclick="javascript:pageTracker._trackPageview('/outgoing/www.searchenginepeople.com/blog/the-definitive-list-75-of-link-building-techniques-in-2008.html');" href="http://www.searchenginepeople.com/blog/the-definitive-list-75-of-link-building-techniques-in-2008.html" target="_blank"><span style="font-family:verdana;">ways</span></a><span style="font-family:verdana;"> to attract and </span><a onclick="javascript:pageTracker._trackPageview('/outgoing/wiep.net/talk/link-building/link-building-strategies/');" href="http://wiep.net/talk/link-building/link-building-strategies/" target="_blank"><span style="font-family:verdana;">acquire links</span></a><span style="font-family:verdana;">. 
If link requests, article submissions or other high labor, low impact tactics are used, then it’s important to make sure the links acquired are good for both users and search engines.</span><br /><br /><span style="font-family:verdana;">Here are a few things to check for:</span><br /><br /><span style="font-family:verdana;"><strong>1. To check the robots meta tag</strong>, look at the page source code. If there is no robots tag, that’s fine.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">If there is a robots meta tag and it looks like this,</span><br /><span style="font-family:verdana;">meta name = “robots” content = “index, follow”</span><br /><span style="font-family:verdana;">that’s good - although it’s not really necessary on the part of the webmaster.</span><br /><br /><span style="font-family:verdana;">If the robots meta tag looks like this:</span><br /><br /><span style="font-family:verdana;">“robots” content = “noindex, nofollow”</span><br /><span style="font-family:verdana;">OR</span><br /><span style="font-family:verdana;">“robots” content = “index, nofollow”</span><br /><br /><span style="font-family:verdana;">that’s not good as far as links for SEO benefit.</span><br /><br /><span style="font-family:verdana;"><strong>2. To see if there is a JavaScript redirect of links from a desired page</strong>, put your cursor over the link and look at the url that appears in the status bar at the bottom of the browser. If it shows the correct link url, then in most cases, it’s ok.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">However, this can be faked within the JavaScript so if you can copy the displayed URL you may want to run the link through a tool like </span><a onclick="javascript:pageTracker._trackPageview('/outgoing/www.rexswain.com/httpview.html');" href="http://www.rexswain.com/httpview.html" target="_blank"><span style="font-family:verdana;">Rex Swain’s HTTP Viewer</span></a><span style="font-family:verdana;"> which will show you if the link redirects, what type and where.</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;"><strong>3. 
To see if robots.txt is blocking</strong> search engine spiders from crawling links on a page, add the text “/robots.txt” to the end of the URL in question.</span><br /><br /><span style="font-family:verdana;">For example:</span><br /><span style="font-family:verdana;">http://www.articleblast.com/robots.txt<br /></span><br /><span style="font-family:verdana;">Here you will see:<br />User-agent:*</span><br /><span style="font-family:verdana;">Disallow: /administrator/</span><br /><span style="font-family:verdana;">Disallow: /cache/</span><br /><span style="font-family:verdana;">Disallow: /components/</span><br /><span style="font-family:verdana;">Disallow: /editor/</span><br /><span style="font-family:verdana;">Disallow: /help/</span><br /><span style="font-family:verdana;">Disallow: /images/</span><br /><span style="font-family:verdana;">Disallow: /includes/</span><br /><span style="font-family:verdana;">Disallow: /language/</span><br /><span style="font-family:verdana;">Disallow: /mambots/</span><br /><span style="font-family:verdana;">Disallow: /media/</span><br /><span style="font-family:verdana;">Disallow: /modules/</span><br /><span style="font-family:verdana;">Disallow: /templates/</span><br /><span style="font-family:verdana;">Disallow: /installation/<br /></span><br /><span style="font-family:verdana;">The “disallow” instruction above tells search engine spiders not to crawl the designated directories. In the case of the article sharing site above, if any articles are located in one of these directories, then the links within those articles to client sites are no good for SEO benefit. However, readers can still click on the links and arrive at the indicated destination.</span><br /><br /><span style="font-family:verdana;"><strong>4. To see if there is a nofollow attribute</strong>, right click on the link URL using a browser like MSIE or Firefox and click on “Properties”. See if there is an attribute called:</span><br /><br /><span style="font-family:verdana;">rel = nofollow. If that’s there, it’s no good for SEO.</span><br /><br /><span style="font-family:verdana;">If it’s not there at all or has another value besides “nofollow” then it’s probably ok.</span><br /><br /><span style="font-family:verdana;">In most cases you only have to do this once on any page of a particular site. The reason sites will do any of these 4 things is to hoard or “sculpt” their site’s PageRank. In the case of blogs, it’s to discourage comment spam.</span><br /><br /><span style="font-family:verdana;">Now here’s the rub: An experienced SEO professional will never do the above 4 steps manually. In almost all cases, they will either develop tools to automatically check for “clean links” or they won’t bother at all. It’s an important question to ask when working with an SEO consultant. Just because a client site’s inbound link count goes from 500 to 5,000 links doesn’t mean all of those links are created equal. In other words, quantity is nowhere near the complete measure of a link building effort to improve search engine rankings.</span><br /><br /><span style="font-family:verdana;">Link building that is based on forms or submissions is very difficult to scale, and the links are often nofollowed after it is discovered that SEOs are populating the site with their content. Link building as a result of creating and promoting content worth linking to is high value, high impact and very scalable. 
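To give a flavor of what such automation might look like, here is a rough Python sketch of the four checks. It relies on the third-party requests library, the URLs and helper names are made up for the example, and a production checker would need to handle many more edge cases, including true JavaScript redirects:

# Rough sketch of automating the four "clean link" checks; illustrative only.
import re
import requests
from urllib.parse import urljoin, urlparse
from urllib import robotparser

def check_link(page_url, link_url):
    issues = []
    resp = requests.get(page_url, timeout=10)

    # 1. Robots meta tag: noindex/nofollow on the hosting page kills SEO value.
    meta = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', resp.text, re.I)
    if meta and re.search(r'no(index|follow)', meta.group(0), re.I):
        issues.append("robots meta tag contains noindex/nofollow")

    # 2. Redirects: server-side redirect chains show up in resp.history.
    #    (JavaScript redirects need a headless browser and are not covered here.)
    link_resp = requests.get(link_url, timeout=10)
    if link_resp.history:
        issues.append("link redirects to %s" % link_resp.url)

    # 3. robots.txt: is the linking page itself crawlable?
    root = "{0.scheme}://{0.netloc}".format(urlparse(page_url))
    rp = robotparser.RobotFileParser()
    rp.set_url(urljoin(root, "/robots.txt"))
    rp.read()
    if not rp.can_fetch("*", page_url):
        issues.append("robots.txt disallows crawling the linking page")

    # 4. rel="nofollow" on the anchor pointing at our link.
    anchor = re.search(r'<a[^>]+href=["\']%s["\'][^>]*>' % re.escape(link_url),
                       resp.text, re.I)
    if anchor and re.search(r'rel=["\'][^"\']*nofollow', anchor.group(0), re.I):
        issues.append('anchor carries rel="nofollow"')

    return issues or ["looks like a clean, crawlable link"]

print(check_link("http://www.example.com/articles/page.html",
                 "http://www.example.com/"))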
However, if it is important to check link sources to determine their value for SEO benefit, you can use the 4 steps above, create your own script to check them automatically, or work with an SEO company that has its own.</span>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-6273787287030407681.post-49000114146854796272009-03-03T07:47:00.002+05:002009-03-03T07:47:00.441+05:00The Latest on Power Usage Effectiveness (PUE)Written by Rakesh Dogra<br /><br />The technical language of any field, including IT, is full of acronyms, most of which are quite handy. Many of us in the data center world are now aware of the latest acronym: PUE. If you are not, it stands for Power Usage Effectiveness. In this article we will discuss this term and the latest buzz in the PUE arena.<br /><br />Data centers are energy hungry monsters and their need for energy keeps rising continuously. It is reminiscent of the mythical monster of Oriental lore named Sursa, whose mouth grew in proportion to the prey that had to be devoured.<br /><br />Attempts are being made to tame this monster so that data centers perform to their maximum ability while consuming the least possible amount of power, in other words, with the maximum possible energy efficiency. But there need to be parameters that define this efficiency and give a standard way to measure it.<br /><br />The Green Grid, a non-profit organization of IT professionals, came out with two metrics known as PUE and DCiE, which are mathematical reciprocals of each other, though the former has gained wider acceptance in the industry.<br /><br />Mathematically, Power Usage Effectiveness is defined as the ratio of the total power consumed by the data center to the power consumed by the IT equipment.<br /><br />PUE basically gives you an idea of the total power being consumed and the amount going towards actual computing. The total power includes the power supplied to cooling equipment, chillers, lighting and so forth, in addition to the IT power delivered to servers, storage nodes, workstations, switches and other equipment directly involved in the information processing going on in the data center.<br /><br />This means that the PUE of an ideal data center would be 1, while at the upper end it could in principle go all the way to infinity. Of course the value of 1 is a utopian ideal, and I don’t think there will ever be a data center with such a value, at least not with current technology.<br /><br />As for actual PUE values in the industry, leading companies such as Google and Microsoft, which run some of the largest data centers, are very efficient. A recent report published by Google stated that the efficiency of its data centers is in the region of 1.21, while Microsoft also expects its new data centers to have a value in a similar range of about 1.22.<br />However, most enterprise data centers are nowhere near that value and lie somewhere in the region of 2.0. This effectively means that for every unit of power which goes into their servers and other IT equipment, one extra unit of power is required for other purposes such as cooling. A data center with a PUE above 3.0 is considered really poor in terms of its energy efficiency.<br /><br />The increasing attention being given to data centers by the new US President may push the PUE metric forward. 
If the recommendation provided by IBM’s CEO comes to fruition then we will see Federal data centers more efficient in 3 years. To achieve this goal will require a measurement that may include PUE or some other metric. If a energy efficient measurement is adopted in Federal data centers it may not be long before it is adopted in data center throughout the US.<br /><br />Reports and surveys have indicated that IT departments are less concerned about the energy efficiency of their facility and more concerned on the immediate cost savings. This frame of mind will need to change if energy efficiency in the data center is to have traction. It just may require penalties in the form of a tax from the Federal government or utility companies to change the attitude toward energy efficiency.<br /><br />The adoption of such standards and metrics will also encourage manufacturers of IT equipment to produce more energy efficient products. The requirement will expand to other equipment such as mechanical and electrical systems and so forth.<br /><br />Lastly it can be said that despite the slow adoption, PUE has become one of the most talked about metrics in the data center industry. For PUE to take shape as a green standard for measuring energy efficiency will require the convincing or pushing of CIO’s and the remainder of the C suite.<br /><br />The true problem with PUE is not everyone is convinced that it is the most accurate measurement for energy efficiency in the data center. In addition, many believe that most data centers do not have the capability to properly measure and document such information.<br /><br />At the moment the Federal government has not made any announcements regarding a mandate to make Federal data centers energy efficient. It is likely because Federal agencies like the Environmental Protection Agency, Department of Energy and research groups they sponsor have not come to a clear consensus on how to properly measure energy efficiency.<br /><br />Other measurements such as Site infrastructure energy efficiency ratio (SI-EER) as defined by The Uptime Institute is essentially the same metric as PUE.<br /><br />Measuring data center energy efficiency will grow in importance as energy costs escalate and the demand for powering IT equipment pushes the data center envelope.<br /><br />Despite the lack of adoption, many including the originators of the Green Grid’s metric have been encouraging IT departments to monitor and document your data centers power and IT trends. The information may not provide an immediate insight, but it will give you a head start on tracking your data center against a mature metric along with provide a fingerprint of your data center energy consumption all of which most never had before.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-41528121485151408612009-03-02T07:43:00.003+05:002009-03-02T07:43:00.281+05:00Absence of Evidence Does Not Equal Innocence<em><span style="font-family:verdana;font-size:85%;color:#c0c0c0;">Written by Paul Thackeray</span></em><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Imagine, for a moment, that the delete button on the email client of all of your employees was permanently disabled. 
This would mean that your email users would be forced to save and organize their email into various folders within the email client, and then when their email file reached quota, your IT team would have to move all the old email into large PST files or other forms of backup.</span><br /><br /><span style="font-family:verdana;">This now means that you have email scattered all over the network in any number of stores. Now imagine that your organization is implicated in a lawsuit and the attorneys for the plaintiff have issued subpoenas for all of your electronic records, including email, related to the lawsuit. </span><span style="font-family:verdana;">How do you access those emails?</span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Well the good news in this scenario is that you at least have all of the email. Often businesses operate under the assumption that if there is no record of the topic in question, then they cannot be held responsible. This is simply not true. Businesses that delete email, even as part of its standard business practice, but have no way of retrieving it in the future, can still be held liable for the information contained within the deleted email. Simply having all of the email, however, is only half the battle. Companies must also have mechanisms in place to quickly search and retrieve the emails in question.</span><br /><br /><span style="font-family:verdana;">While the suggestion that a disabled delete key may seem like an extreme scenario, the concept behind it is important: Business email should not be deleted until the organization has some way to archive and, more importantly, retrieve email.</span><br /><br /><span style="font-family:verdana;"><strong>Archiving for the rest of us</strong></span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Organizations in heavily regulated industries, such as the financial, government and healthcare industries were among the first to put policies and solutions in place in order to satisfy regulatory standards for their specific markets. But all organizations, no matter what vertical, need to very carefully assess what risks they face by not saving email. </span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">It is an unfortunate fact that most organizations will at some point in the course of normal operations be implicated in lawsuits. Litigation discovery, or e-discovery, involves all parties in a lawsuit and requires that all data or information relevant to the lawsuit be provided as requested by the court of law. The cost of finding and producing such information can often outweigh the actual damages claimed in the lawsuit itself. This is most often the case for companies that are not using an email archiving solution.</span><br /><br /><span style="font-family:verdana;"><strong>Key features to look for</strong></span><br /><span style="font-family:verdana;"></span><br /><span style="font-family:verdana;">Message archiving solutions should have the ability to full index all email to enable simple search and retrieval of emails containing specific key words in an e-discovery request as well as for corporate policy control. Retention policies are also a key factor when determining which solution fits the needs of the organization; archiving solutions should have the storage capacity to keep email records for long periods of time in order to satisfy regulatory compliance standards. 
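To make the indexing idea concrete, here is a toy sketch of the kind of keyword index such products build behind the scenes; it is a simplified illustration rather than any vendor's implementation, and the sample messages are invented:

# Toy keyword index over archived messages; illustration only.
from collections import defaultdict

archive = {
    "msg-001": "Quarterly forecast attached, please keep confidential",
    "msg-002": "Lunch on Friday?",
    "msg-003": "Updated forecast numbers for the audit committee",
}

index = defaultdict(set)
for msg_id, body in archive.items():
    for word in body.lower().split():
        index[word.strip(",?.")].add(msg_id)

def search(keyword):
    """Return the IDs of archived messages containing the keyword."""
    return sorted(index.get(keyword.lower(), set()))

print(search("forecast"))   # ['msg-001', 'msg-003']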
All functionality should be organized via a simple user interface that is easy for the administrator to use, but that also captures a high-level glimpse into the performance of the message archiving solution that can be easily demonstrated to management or legal counsel.</span><br /><br /><span style="font-family:verdana;">The bottom line: there is no single reason for implementing an archiving solution. But one thing is for certain, email must be retained by every organization that relies upon it as one of its main business communication channels. Deploying an easy-to-use solution will save a lot of time and resources for the organization in the long run. Further, it is a much simpler and more practical solution than disabling the delete key on the email client.</span>Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-6273787287030407681.post-84047058106639701492009-02-28T07:46:00.001+05:002009-02-28T07:46:00.875+05:00Monitoring, Management and Service Frameworks<span style="font-family:verdana;"><span style="font-size:85%;color:#c0c0c0;"><em>Written by Jon Greaves</em></span><br /></span><br /><span style="font-family:verdana;">Since the first computers entered server rooms, the need to monitor them has been well understood. Earliest forms of monitoring were as simple as status lights attached to each module showing if it was powered up or in a failed state. Today’s datacenter is still awash with lights, with the inside joke being that many of these are simply “randomly pleasing patterns” and in all honestly, providing very little use.</span><br /><br /><span style="font-family:verdana;">In 1988, RFC1065 was released. Request for Comment (RFC), allowed like-minded individuals to band together and build standards. RFC’s - typically under the umbrella of organizations like the Internet Engineering Task Force (IETF), 1065 and two sister RFC’s - outline a protocol “Simple Network Management Protocol” (SNMP) and a data structure Management Information Base (MIB). SNMP was originally focused on network devices, but its value was soon realized covering all connected IT assets including servers.</span><br /><br /><span style="font-family:verdana;">Today, SNMP has been through three major releases and is still a foundation for many monitoring solutions.</span><br /><br /><p><span style="font-family:verdana;">At the highest-level, three forms of monitoring exist today:</span></p><ol><li><span style="font-family:verdana;">Reactive – a device (server, storage, network, etc.) sends a message to a console when something bad happens</span></li><br /><li><span style="font-family:verdana;">Proactive – the console asks the device if it is healthy</span></li><br /><li><span style="font-family:verdana;">Predictive – based on a number of values, the health of a device is inferred</span></li></ol><br /><span style="font-family:verdana;">Each of the above has pros and cons. For example, reactive monitoring tends to offer the most specific diagnostics, e.g. my fan is failing. One scenario exists which limits this as your only solution. Should the device die, or fall off the network, it will not generate messages. Since the console is purely reacting to messages, it is not able to determine if the device is alive and well, or completely dead. This is a major flaw in reactive monitoring solutions.</span><br /><br /><span style="font-family:verdana;">Proactive on the other hand, has the console polling the device at predetermined intervals. 
During each poll the console asks the device a number of questions to gauge its health and function. This solves the issue of reactive monitoring, but creates significantly more network traffic and load on the device. In fact, cases have occurred where devices have been hit so hard, they cannot operate.</span><br /><br /><span style="font-family:verdana;">So what typically happens, is reactive monitoring is paired with proactive polling to resolve this issue. You get the benefits of both solutions and negate the disadvantages.</span><br /><br /><span style="font-family:verdana;">While reactive and predictive monitoring may be the norm today, they still leave computer systems vulnerable to outages. As complexity continues to grow, a different approach to monitoring is needed. Two very interesting fields of research - prognostics and autonomics - are emerging to take on these challenges.</span><br /><br /><span style="font-family:verdana;">Prognostics make use of telemetry to look for early signs of failures often by applying complex mathematical modules. These modules take into account many streams of data and not only look at directly correlated failure conditions, but also what might best be described as the harmonics of a system. For example, by looking at the frequency of alarms and health data from multiple components of a system, small variations can be detected which can lead to failures.</span><br /><span style="font-family:verdana;"></span><br /><br /><span style="font-family:verdana;">This approach has been used with great success in other industries. The Commercial Nuclear Industry has deployed such an approach to help detect issues and false alarms. False alarms can result in the shutdown of a facility and cost millions of dollars per day. We also see many military applications for this kind of advanced monitoring including the next generation battle field systems and the joint strike fighter where thousands of telemetry streams are analyzed real-time to look for issues that could impact a mission.</span><br /><br /><span style="font-family:verdana;">While these applications seem far-fetched from the problems of monitoring today’s computer systems, several companies have made huge advances in this technology. Most notably, Sun Microsystems, who has used such approaches in several high end servers to not only detect pending hardware failures, but also applied to software to look for Software Aging where memory leaks, run away threads and general software bloat can lead to outages of long running applications. Pair detection of aging with “software rejuvenation” where applications are periodically cleansed, and large improvements in application availability can be realized.</span><br /><br /><span style="font-family:verdana;">Autonomics and autonomic computing can also be applied to these challenges to allow IT infrastructure to take corrective action to prevent outages and optimize application performance. Autonomic Computing is an initiative started in early 2001 by IBM, with the goal of helping manage complex distributed systems. This tends to manifest itself in tools implemented as decision trees, mimicking the actions a system admin might perform to correct issues before they become outages. 
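As a flavor of what one branch of such a decision tree might look like, here is a deliberately simplified sketch; the metric names, thresholds and restart commands are hypothetical examples, not taken from any real monitoring product:

# Deliberately simplified autonomic-style remediation loop; all names and
# thresholds below are hypothetical examples.
import subprocess
import time

THRESHOLDS = {"memory_pct": 90, "queue_depth": 500}

def collect_metrics():
    """Stand-in for real telemetry collection (SNMP poll, agent, etc.)."""
    return {"memory_pct": 93, "queue_depth": 120}

def remediate(metric):
    """Mimic the action an administrator might take for a known symptom."""
    if metric == "memory_pct":
        # e.g. restart a leaking application before it exhausts memory
        subprocess.run(["systemctl", "restart", "example-app.service"])
    elif metric == "queue_depth":
        subprocess.run(["systemctl", "restart", "example-worker.service"])

while True:
    metrics = collect_metrics()
    for name, value in metrics.items():
        if value > THRESHOLDS[name]:
            print("threshold breached for %s (%s); taking corrective action" % (name, value))
            remediate(name)
    time.sleep(60)   # poll once a minute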
Academia is leading the charge in this area with key projects in super computing centers where scale and complexity requires a new approach to attack this problem.</span><br /><br /><span style="font-family:verdana;">With the advances in systems monitoring and management also comes new kinds of risks - some of which can come from seemingly harmless data. Let’s take the example of a publicly traded company. This company outsources the hosting and management of its infrastructure. The application management company enables monitoring, the customer is careful to exclude any sensitive data from what’s being monitored. The customer just allows the basic data collected reporting on memory, disk, network and CPU. From first impression, this seems like harmless data.</span><br /><br /><span style="font-family:verdana;">Each quarter as the company closes its books, its CRM and ERP systems (both monitored) crunch the quarter’s data. For Q1, the customer has a great quarter as publicly disclosed in filings. The provider monitoring their environment now has a benchmark that one could infer transactional volume based on disk I/O, memory and CPU utilization. But let’s say the customer misses their numbers in Q2. Now, the provider has data that can infer a bad quarter. As Q3 is in the process of closing, and before the CFO has even seen the results - armed with just basic performance data from CPU, Memory and Disk - the hosting provider can now, in theory, predict the quarter’s results.</span><br /><br /><span style="font-family:verdana;">This simplistic scenario highlights the value of telemetry, even that which seems low risk in the future. As our ability to infer failures, performance, and eventually business results grows, new kinds of risks will emerge, requiring mitigation.</span><br /><br /><span style="font-family:verdana;">To this point we have focused on what basically is “node level” monitoring, i.e., the performance of a server or other piece of IT infrastructure and its health alone. This is, and will likely always be, the foundation for managing IT systems. However, it does not tell the full story - arguably the most important factor in today’s environments - of how the business processes supported by the infrastructure are performing.</span><br /><br /><span style="font-family:verdana;">IT Service Management focuses on the customer’s experience of a set of IT systems as defined by their business functions. For example, assume a customer has a CRM system deployed. While the servers may be reporting a healthy status, if the application has been misconfigured or a batch process is hung, the end user would be experiencing degraded operations while a traditional monitoring solution is likely to be reporting the system functioning and “green”. Taking an IT Service Management approach, the CRM solution would be modeled showing the service dependencies (e.g., depends on web, application and database tiers and requires network, servers and storage to be functioning). This model is then enhanced by simulated end user transactions and application performance metrics to identify issues outside the availability of the core IT infrastructure and statistics from an IT service desk. 
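A minimal sketch of such a service model might look like the following; the tier names, health checks and synthetic transaction are invented for illustration:

# Minimal IT Service Management style health model; all names are illustrative.

def synthetic_transaction():
    """Stand-in for a scripted end-user transaction (e.g. log in, open a record)."""
    return True   # would return False if the simulated user journey failed

crm_service = {
    "web tier":         lambda: True,    # each check would poll real infrastructure
    "application tier": lambda: True,
    "database tier":    lambda: False,   # say a batch job has hung the database
}

def service_status(dependencies, user_check):
    failed = [name for name, healthy in dependencies.items() if not healthy()]
    if failed:
        return "DEGRADED: " + ", ".join(failed)
    if not user_check():
        return "DEGRADED: infrastructure green, but end-user transaction failing"
    return "GREEN"

print("CRM service:", service_status(crm_service, synthetic_transaction))
# -> CRM service: DEGRADED: database tier

The point is that the service can show as degraded even when every individual server reports green, which is exactly the gap a node-level view misses.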
This holistic approach to monitoring provides greater visibility to CIO’s, typically expressed as a dashboard of how their IT investment is performing from their user community’s view.</span><br /><span style="font-family:verdana;"></span><br /><br /><span style="font-family:verdana;">Virtualization technology and its use to enable cloud computing has opened up many opportunities for organizations to realize the agility we all seek when it comes to our IT investments. Virtualization also has not simplified the administration of IT as was originally promised – instead, it has greatly increased it. Case in point, look at an example of a typical use of virtualization - server consolidation. Pre-consolidation, each server had a function, typically supported by a single operating system image running on bare metal. Should the server or operating system experience a problem, it was easy to uniquely identify the issue and initiate an incident handling process to remediate. In a consolidated environment, a single server may be running 10’s of virtual machines, each with their own unique function. These virtual machines may also be migrated between physical servers in an environment. Traditional monitoring solutions were not designed with the concept that a resource may move dynamically or even are offline for the time it’s not needed, and started on demand.</span><br /><br /><span style="font-family:verdana;">Now, taking the extreme of virtualization to the next logical level - cloud computing - today’s monitoring tools are taxed even more. Your servers are now hosted in an infrastructure/platform, and as a service provider, you have even less control of your resources. This hasn’t gone unnoticed by providers. In fact, over the past month, several monitoring consoles have been released (including for Amazon EC2) to start addressing this challenge. Independent solutions are also appearing, most noticeably Hyperic who launched </span><a href="http://www.cloudstatus.com/"><span style="font-family:verdana;">http://www.cloudstatus.com/</span></a><span style="font-family:verdana;"> where you can view Amazon and GoogleApp Engine’s availability by using “proactive monitoring”. The natural evolution will be these tools interfacing with more traditional solutions to give companies more holistic views of their environment. This takes an old concept of “Manager of Managers” to the next level.</span><br /><br /><span style="font-family:verdana;">Today’s computing architectures are really taxing the foundations of monitoring solutions. This does, however, create great opportunities for tools vendors and solution providers to attack. More so, it also brings more to the focus, the idea of IT Service Management where understanding the end users performance, expectations and mapping back to SLA’s becomes the norm.</span><br /><br /><span style="font-family:verdana;">A brief interview with Javier Soltero, co-founder and CEO of Hyperic, the leader in multi-platform, open source IT management.</span><br /><span style="font-family:verdana;"></span><br /><br /><div align="center"><strong><span style="font-family:verdana;color:#3366ff;">Question(s) and Answer(s)</span></strong></div><br /><span style="font-family:verdana;"></span><br /><strong><span style="font-family:verdana;">Q. Monitoring is typically seen as the last step of any deployment, often not considered during the development. 
<br /><br /><span style="font-family:verdana;">Now, taking the extreme of virtualization to the next logical level - cloud computing - today’s monitoring tools are taxed even more. Your servers are now hosted by an infrastructure- or platform-as-a-service provider, and you have even less control over your resources. This hasn’t gone unnoticed by providers. In fact, over the past month, several monitoring consoles have been released (including for Amazon EC2) to start addressing this challenge. Independent solutions are also appearing, most noticeably Hyperic, which launched </span><a href="http://www.cloudstatus.com/"><span style="font-family:verdana;">http://www.cloudstatus.com/</span></a><span style="font-family:verdana;">, where you can view Amazon and Google App Engine availability using “proactive monitoring”. The natural evolution will be these tools interfacing with more traditional solutions to give companies more holistic views of their environment. This takes the old concept of a “Manager of Managers” to the next level.</span><br /><br /><span style="font-family:verdana;">Today’s computing architectures are really taxing the foundations of monitoring solutions. This does, however, create great opportunities for tools vendors and solution providers to attack. It also brings into sharper focus the idea of IT Service Management, where understanding end users’ performance and expectations, and mapping them back to SLAs, becomes the norm.</span><br /><br /><span style="font-family:verdana;">What follows is a brief interview with Javier Soltero, co-founder and CEO of Hyperic, the leader in multi-platform, open source IT management.</span><br /><br /><div align="center"><strong><span style="font-family:verdana;color:#3366ff;">Questions and Answers</span></strong></div><br /><strong><span style="font-family:verdana;">Q. Monitoring is typically seen as the last step of any deployment, often not considered during development. Do you see customers embracing a tighter coupling of the entire software lifecycle with the engineering of IT Service Management solutions?</span></strong><br /><br /><span style="font-family:verdana;">Absolutely, it’s a very encouraging trend, especially among SaaS companies and other businesses that are heavily dependent on their application performance. The really successful ones spend time building a vision for how they want to manage the service. That vision then helps them select which technologies they use and how they use them. Companies that build instrumentation into their apps have an easier time managing their application performance and will resolve issues faster.</span><br /><br /><strong><span style="font-family:verdana;">Q. Customers are really embracing IT Service Monitoring as a key element for understanding not only performance but also the ROI of IT investments. What challenges do you see for customers adopting these technologies?</span></strong><br /><br /><span style="font-family:verdana;">The biggest challenge we see is the customer’s ability to extract the right insight from the vast amount of data available. The usability of these products also tends to make the task of figuring out things like ROI and other business metrics difficult. Oftentimes a tool that can successfully collect and manage the massive amounts of data required to dig deep into performance metrics lacks an analytics engine capable of displaying the data in an insightful way, and vice versa.</span><br /><br /><strong><span style="font-family:verdana;">Q. End-user monitoring has typically been delivered with synthetic transactions, and this has certainly been a valuable tool. How do you see this technology evolving?</span></strong><br /><br /><span style="font-family:verdana;">The technology for external monitoring of this type will continue to evolve as the clients involved in these applications get more and more sophisticated. For example, a user might interact with a single application that includes components from many other external applications and services. The ability of these tools to properly simulate all types of end-user interactions is one of the many challenges. More important is the connection of the external transaction metrics to the internal ones.</span><br /><br /><strong><span style="font-family:verdana;">Q. Monitoring is one part of the equation; mapping availability and performance is what makes this data useful. With virtualization playing such a big part in datacenters today, how do you see tools adapting to meet the challenges of portable and dynamic workloads?</span></strong><br /><br /><span style="font-family:verdana;">The most important element of monitoring in these types of environments is visibility into all layers of the infrastructure and the ability to correlate information. Driving efficiency in dynamic workload scenarios like on-premise virtualization or infrastructure services like Amazon EC2 requires information about the performance and state of the various layers of the application. Providing that level of visibility has been a big design objective of Hyperic HQ from the beginning, and it’s helped our customers do very cool things with their infrastructure.</span><br /><br /><strong><span style="font-family:verdana;">Q. 
How do you see monitoring and IT service management evolving as cloud computing becomes more pervasive?</span></strong><br /><br /><span style="font-family:verdana;">Cloud computing changes the monitoring and service management world in two significant ways. First, the end user of cloud environments is primarily a developer who is now directly responsible for building, deploying, and managing his or her application. This might change over time, but I’m pretty sure that regardless of the outcome, Web and IT operations roles will be changed dramatically by this platform. Second, this new “owner” of the cloud application is trapped between two SLAs: the SLA he provides to his end user and the SLA that the cloud provides to him. Cloudstatus.com is designed to help people address this problem.</span><br /><br /><strong><span style="font-family:verdana;">Q. Do you see the SaaS model reemerging for the delivery of monitoring tools, where customers will use hosted monitoring solutions?</span></strong><br /><br /><span style="font-family:verdana;">Yes, but it will be significantly different from the types of SaaS-based management solutions that were built in the past. The architecture of the cloud is the primary enabler for a monitoring solution that, like the platform that powers it, is consumed as a service.</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6273787287030407681.post-28569996238564535862009-02-27T07:26:00.002+05:002009-02-27T07:26:02.225+05:00Making the Best Use of Your Security Budget in Lean Times: Four Approaches<span class="small" style="font-family:verdana;"><span style="COLOR: rgb(192,192,192);font-size:85%;" ><span style="FONT-STYLE: italic">Written by Elizabeth Ireland</span></span><br /><br /></span><span style="font-family:verdana;">Many predict 2009 will produce the tightest economic conditions in decades. The subprime meltdown, tight credit markets and recession conditions will mean most CIOs will feel the downward spiral of the economy right where it hurts -- in their IT budgets. </span><p style="FONT-FAMILY: verdana">Unfortunately, this also coincides with the most serious threat environment security professionals have faced. Hackers’ tactics are becoming more targeted. The increase in the number and business importance of web applications is generating additional enterprise risk. Budgets may get tight, but your responsibility remains the same: minimize risk. </p><p style="FONT-FAMILY: verdana">It’s a tall order in the face of possible spending cutbacks, but precisely because budgets are tight you have to focus on how best to reduce risk - and that definitely does not mean paying less attention to security. In fact, at times like these, that may be the biggest mistake. The highest levels of an organization are asking their CIOs, “How do we know we’re secure?” The only way to know is by understanding the risks and the ROI, and how security fits into not only your other IT priorities but also the company’s bottom line. Defending the security budget is always a challenge, but here are four approaches that can help. </p><p style="FONT-FAMILY: verdana">1. Metrics make the most compelling argument. Ask yourself this question: is your security risk going up or down over time, and what is impacting it? This is baseline data that every organization needs and should be set up to monitor. If you cannot answer this clearly, realign your projects and priorities to make sure you can get this information on an ongoing basis. 
Every CIO should know at least three things: how vulnerable are my systems, how securely configured are my systems, and are we prioritizing the security of the assets of highest value to the business? Though security metrics are in the early days of development and adoption, the industry is maturing and solid measurements are available. These areas can be assessed and assigned an objective numeric score (a minimal sketch of one such scoring approach follows this list), allowing you to set your company’s own risk tolerance and use it to make critical decisions about where to allocate funds. As you face increased budget scrutiny, the metrics allow you to identify, and defend as necessary, where your security priorities are and how security and risk fit into overall ROI.<br /><br />2. Compare your baseline to others in your industry. The guarded nature of security data means CIOs trying to access this type of information will have to get creative. A good place to start is the Center for Internet Security, whose consensus baseline configurations can be used as a jumping-off point to identify areas of risk. Vertical industry benchmarks will be an evolving area, and another source may be what you can learn from your personal relationships. Seek out others within your industry and find out what metrics they are using and what they are spending as a percentage of their IT budget. Risk tolerance is specific to each organization, but there are similarities within industries that could prove helpful.<br /><br />3. Learn from other areas in your company. Many process-oriented disciplines can serve as a proxy for the type of evolution facing security; network operations is a good example. In the early days of network operations, the only scrutiny came when things weren’t working correctly. Over the years, the discipline has matured to a level of operational metrics for uptime and performance that are embedded in quarterly and annual performance goals. These metrics allow a continuous cycle of performance, measurement and improvement. In addition, network operations can provide an important lesson about single-solution economies of scale. Find solutions that work across your entire enterprise; this is the only way to get economies of scale in implementation and to ensure you get the critical enterprise-wide risk information that can deliver the metrics you need.<br /><br />4. Take steps to automate your compliance process. Are you compliant, and can you routinely deliver the reports that auditors request? The economic benefits that come from doing this correctly are significant. Audit costs are directly related to how complicated it is to audit and prove the integrity of a business process, so finding a way to save the auditors’ time is one of the single biggest opportunities to drive down costs. Even though your audit costs may be hitting the finance area’s budget, meet with your company’s finance team to understand what audits are costing you and how the right kind of automation could lessen those costs; there will certainly be time and resource savings for the security team as well. There isn’t an exact recipe for compliance automation, so talk to your auditors, look at your environment, and begin discovering how much time is spent preparing for and reacting to audits. If your company allows divisions to automate individually, it’s time to think about taking those principles enterprise-wide.</p>
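<p style="FONT-FAMILY: verdana">As referenced in the first approach above, here is a minimal sketch of how an objective, numeric risk score might be derived from basic vulnerability and configuration data. The asset list, weights and worst-case baseline are illustrative assumptions, not an industry-standard formula.</p><pre>
# Hypothetical risk score: weight each asset's open vulnerabilities and
# failed configuration checks by its business value, then normalize the
# total against an assumed worst case to get a 0-100 score.

ASSETS = [
    # (name, business value 1-5, open vulnerabilities, failed config checks)
    ("payroll-db",        5,  4, 2),
    ("public-web-server", 4,  9, 5),
    ("test-lab-host",     1, 12, 7),
]

def asset_risk(value, open_vulns, failed_checks,
               vuln_weight=2.0, config_weight=1.0):
    """Raw risk contribution of one asset (higher is worse)."""
    return value * (vuln_weight * open_vulns + config_weight * failed_checks)

def enterprise_risk_score(assets):
    """Normalize total risk to a 0-100 score against an assumed worst case."""
    total = sum(asset_risk(value, vulns, checks)
                for _, value, vulns, checks in assets)
    # Assumed worst case: every asset at value 5 with 20 vulns and 10 failures.
    worst = len(assets) * asset_risk(5, 20, 10)
    return round(100.0 * total / worst, 1)

if __name__ == "__main__":
    # Tracked quarter over quarter, this single number shows whether risk is
    # trending up or down and which assets are driving the change.
    print("Enterprise risk score (0 best, 100 worst):", enterprise_risk_score(ASSETS))
</pre>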
<p style="FONT-FAMILY: verdana">Regardless of budget conditions, you will still be faced with decisions about which projects have the biggest impact on the business. The threat environment requires that you make the best possible decisions with your available budget by investing in the right places and making better use of your resources. Lastly, remember that times of difficulty are often times of opportunity. Lessons learned now, in the face of tighter budgets, can spark valuable models of efficiency and progress for the future.</p>Unknownnoreply@blogger.com0