Wednesday, March 25, 2009

IT Service Management - Metrics

Written by Tsvetanka Stoyanova

Metrics and other ways of measuring performance are very popular among technical people. Almost every aspect of a computer's performance can be, and is, measured. When it comes to service metrics for IT personnel and organizations, however, this is an area companies should pay close attention to.

Computers and machines are easier to measure because there are few, if any, subjective factors. With organizations, and especially with people, the subjective factor becomes much more important, and frequently, even when the best methodology is used, the results obtained from metrics are, to put it mildly, questionable.

Who Needs IT Service Management Metrics

Metrics are used in management because they are useful. Metrics are not applied just out of curiosity but because investors, managers and clients need the data.

There is no doubt that metrics are useful only when they are true. You have probably heard Mark Twain's quote about “lies, damned lies, and statistics” (or, in this case, metrics). True metrics are achieved by using reliable methodologies. It is useless simply to accumulate data and present it in a pretty graph or an animated slideshow. This might be visually attractive, but the practical value of such data is nil.

However, even when the best IT Service Management metrics methodology is used, deviations are inevitable. Therefore, one should know how to read the data obtained from metrics. It is also true that metrics, including IT Service Management metrics, can be used in a manipulative way, so be cautious when reading metrics and, above all, when making decisions based on them.

Where to Look for IT Service Management Metrics

There are several metrics methodologies in use for IT Service Management, so you can't complain about a lack of choice. Some of these IT Service Management metrics methodologies have been borrowed (with or without adaptation) from other industries, while others have been designed specifically for IT Service Management.

Many organizations, including those behind ITIL and ITSM, regularly publish books and reports on IT Service Management. Even though these are not the only organizations that define the de facto standards for IT Service Management metrics, their books and reports are among the top authorities in the field. A short abstract from the book “Metrics for IT Service Management” by Peter Brooks is available online; the sample shows the table of contents and includes the first couple of chapters, so if you have the time to read it, it should give you a more in-depth idea of what IT Service Management metrics are and how to use them.


In addition to the general metrics for IT Service Management, there are sets of metrics for the different areas of IT Service, such as configuration management, change management, etc. Therefore, if you are interested in measuring only a particular subarea of your IT services, you don't have to go through the whole set of IT metrics just to get the information for the area in question. Many IT consulting companies have also developed benchmarking and other methodologies that measure IT Service Management and these documents are also useful.
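
To make the idea of an area-specific metric concrete, here is a minimal sketch (in Python, using invented change records and field names) of one commonly cited change-management measure, the change success rate. It illustrates the kind of calculation such metric sets describe rather than a formula prescribed by any particular framework.

    # Minimal sketch: computing a hypothetical change-management metric
    # (change success rate) from a list of change records. Field names
    # and data are invented for illustration only.
    changes = [
        {"id": "CHG-001", "status": "successful"},
        {"id": "CHG-002", "status": "failed"},
        {"id": "CHG-003", "status": "successful"},
        {"id": "CHG-004", "status": "rolled_back"},
    ]

    successful = sum(1 for c in changes if c["status"] == "successful")
    success_rate = 100.0 * successful / len(changes)
    print(f"Change success rate: {success_rate:.1f}% ({successful}/{len(changes)})")
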
In addition to ITIL, ITSM, and the various consulting companies, another place to get ideas for IT Service Management metrics is the websites and marketing materials (e.g. white papers) of vendors of IT Service Management software. Some of these vendors implement the metrics of other organizations. This is why IT Service Management metrics are often similar and are sometimes simply the same set viewed from a different angle, which of course can lead to different results.


There are many vendors you can find with a quick web search. Whenever possible, get a trial version (if the vendor offers one), give it a test run, and decide for yourself whether what you got is what you need. As I already mentioned, IT Service Management metrics are only useful when they are true. That is why you will hardly want to waste your (and your employees') time and money on a set of IT Service Management metrics that is not applicable to your situation.

With so many metrics that lead to so many different results in the same situation, one sometimes wonders whether IT Service Management metrics actually measure one and the same thing and whether they are any good. Yes, IT Service Management metrics are useful, but only when used properly.

Tuesday, March 24, 2009

What can log data do for you?

Written by Lagis Zavros

Organizations today are deploying a variety of security solutions to counter the ever increasing threat to their email and Internet investments. Often, the emergence of new threats spawns solutions by different companies with a niche or a specialty for that specific threat - whether it is a guard against viruses, spam, intrusion detection, Spyware, data leakage or any of the other segments within the security landscape.

This heterogeneous security environment means that there has been a proliferation of log data generated by the various systems and devices. As the number of different log formats increases, coupled with the sheer volume of log data, it becomes more and more difficult for organizations to turn this data into meaningful business information.

Transforming data into information means that you know the “who, what, when, where, and how” - giving you the ability to make informed business decisions. There is no point capturing data if you do not use it to improve aspects of your business. Reducing recreational web browsing, improving network performance, and enhancing security, are just a few outcomes that can be achieved using information from regular log file analysis.
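
As a rough illustration of pulling the “who, what, when” out of raw log lines, the sketch below parses a hypothetical space-delimited proxy log and summarizes browsing activity per user. The log format and field order are assumptions made for the example; real devices each produce their own formats and fields.

    # Minimal sketch: turning raw proxy-log lines into a per-user summary.
    # The format (timestamp, user, url, bytes) is assumed for illustration.
    from collections import defaultdict

    sample_log = [
        "2009-03-24T09:15:01 alice http://news.example.com/story 15320",
        "2009-03-24T09:15:07 bob http://video.example.com/clip 848200",
        "2009-03-24T09:16:42 alice http://intranet.example.com/report 2210",
    ]

    bytes_per_user = defaultdict(int)
    requests_per_user = defaultdict(int)

    for line in sample_log:
        timestamp, user, url, size = line.split()   # when, who, what, how much
        bytes_per_user[user] += int(size)
        requests_per_user[user] += 1

    for user in sorted(bytes_per_user):
        print(f"{user}: {requests_per_user[user]} requests, "
              f"{bytes_per_user[user]} bytes downloaded")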

To achieve these outcomes, it is important for organizations to have a log management process in place with clear policies and procedures and also be equipped with the appropriate tools that can take care of the ongoing monitoring, analysis and reporting of these logs.

Tools that are used only after a major problem has occurred give you just half the benefit. Regular reporting is required in order to be proactive and track patterns or behaviours that could lead to a major breach of policy or impact mission-critical systems.

10 tips to help organizations get started with an effective proactive logging and reporting system:

1. Establish Acceptable Usage Policies
Establish policies around the use of the Internet and email and make staff aware that you are monitoring and reporting on usage. This alone is an effective step towards reducing inappropriate usage, but if it’s not backed by actual reporting, employees will soon learn what they can get away with.

2. Establish Your Reporting Requirements
Gather information on what you want to report and analyse. Ensure this supports your obligations under any laws or regulations relevant to your industry or geography.

3. Establish Reporting Priorities
Establish priorities and goals based on your organization’s risk management policies. What are the most important security events that you need to be alerted to?

4. Research your existing logging capabilities
Research the logging capabilities of the devices on your network such as proxy servers, firewalls, routers and email servers and ensure they are producing an audit log or event log of activity.

5. Address shortfalls between your reporting requirements and log data
Open each log file to get a feel for what information is captured and identify any shortfalls with your reporting requirements. Address any shortfalls by adjusting the logging configuration or implementing an independent logging tool such as WebSpy Sentinel.

6. Establish Log Management Procedures
Establish and maintain the infrastructure and administration for capturing, transmitting, storing and archiving or destroying log data. Remember that archiving reports may not be enough, as sometimes you may be required to go back and extract from the raw data.

Ensure data is kept for an appropriate period of time after each reporting cycle and that the raw data related to important events is securely archived.

7. Evaluate and decide on a Log File Analysis Product
Evaluate log file analysis and reporting products such as WebSpy Vantage to make sure your log formats are supported, your reporting requirements are met and that it is capable of automated ongoing reporting.

Ensure it can be used by business users as well as specialist IT staff, removing the dependence on these busy and critical staff members. Make sure the vendor is willing to work with you to derive value from your log data. Often a vendor that supports many different log formats will have some insight that may help you in obtaining valuable information from your environment.

8. Establish Standard Reporting Procedures
Once a report product has been decided on, establish how regularly reports should be created, who is responsible for creating them, and who is able to view them. Store user reports in a secure location to ensure confidentiality is maintained.

9. Assign Responsibilities
Identify roles and responsibilities for taking action on events, remembering that responsibility is not only the security administrator’s domain.

10. Review and Adapt to Changes
Because of the constantly changing nature of the security environment, it is important to revisit steps 1-9 regularly and fine-tune this process to get the maximum value.

Are you Web 2.0 Savvy?

Web 2.0 social networking is now part of our cultural fabric. Once considered a casual pastime for teenagers, it has exploded into “the must-do thing” for corporate businesses. It has come a long way from being a teenage e-tool to one the corporate world sees as a must-participate platform.

Bookstores are filling with authors' views and angles on the Web 2.0 social networking phenomenon. One book in particular has caught my attention. “Throwing Sheep in the Boardroom” is a cute title that amply describes the initial perception of social networking's impact on businesses. A younger generation of professionals, Web 2.0 social mavericks, has been integrating work, marketing, social activities and networking via the social networks, while the older crowd occupying the boardrooms is only now starting to see its importance.

The book, authored by Matthew Fraser and Soumitra Dutta and published by John Wiley & Sons, Ltd., provides a clear picture of the impact Web 2.0 is having on our lives and sets it against the corporate boardroom's reluctance to embrace the technology as a tool for harnessing the benefits of collaborative environments.

Data center and IT professionals are no strangers to working online and, for the most part, are already engaged in some form in the social networking scene. For instance, the growth of blogging has carried many well-known IT bloggers into social networking stardom. Blogging is only one part of the Web 2.0 scene that many IT socialites are familiar with.

Small technology social groups formed within networks such as LinkedIn and Facebook have exploded. The desire to connect with others who speak and understand IT has always been around, but now Web 2.0 has made it easier for them to do so.

The latest craze gaining publicity is Twitter. News outlets, politicians and journalists are twittering daily. Twitter (www.twitter.com) allows users to send a 140-character message to the Twitter community (which can be keyword searched) and, more specifically, directly to their followers.

News agencies and politicians are using this tool to share the latest updates of their day with their followers and constituents. It gives those who follow an “insider” view, with instant news the moment it happens. Recently CNN reported that during a news conference, attendees were frantically twittering on their phones to the Twitter network about what they were seeing, hearing and feeling as it happened.

The social networking craze is, and needs to be, a part of every marketing manager's daily routine. If you are not on LinkedIn (the adult version of Facebook), MySpace, Twitter, ReJaw, Plaxo or countless other networks worldwide, then you are missing an opportunity.

The social networks are a direct connection to a younger generation that has and will continue to influence the IT industry. They can provide you with an insider's view and opinion on products, services or just about anything. If you want to get a pulse on what is going on, then you need to invest some time and immerse yourself in the Web 2.0 social networking scene.
A quote from the book states, “Web 2.0 tools are becoming powerful platforms for cooperation, collaboration and creativity.”


“If you are not embracing the Enterprise 2.0 model, you risk getting left behind,” says Fraser, coauthor along with Dutta of Throwing Sheep in the Boardroom: How online social networking will transform your life, work and world.

If you are a marketing manager, salesperson or follower of any subject, this book is a must-read to better understand and prepare yourself for the “e-ruptions” that will be created by the Web 2.0 social networking revolution. To learn more, visit www.throwingsheep.com or purchase a copy at a bookstore, or direct from the publisher by calling 800-225-5945.

Monday, March 23, 2009

Ensuring your data center facility is compliant

Written by Rakesh Dogra

Data centers are becoming ever more important in virtually all walks of business, commerce and industry. Because of this prominence, their impact on normal activities is increasing as well, and any disruption to a data center could bring business and commercial activities to a standstill, at least temporarily, causing huge losses to the company, its clients and its reputation, and, most importantly, putting at risk the valuable and often confidential data and information these facilities handle and process.

Therefore, governments and regulatory bodies have increasingly been putting data centers under scrutiny, and there are growing attempts to put regulatory mechanisms in place to ensure that data centers comply with certain minimum standards across various platforms. This helps ensure consistency and a uniform baseline of quality and efficiency across the entire data center industry.

What to Comply with?

Compliance means adhering to benchmarks defined by the various regulations and directives set forth by the appropriate bodies. As far as data centers are concerned, there are several such regulations, and not all of them apply to all types of data centers, as we shall see below where some of them are listed:

Sarbanes-Oxley Act – this US federal act applies to public companies and does not necessarily apply to privately held companies. The act has various sections dealing with different areas of compliance; sections 302 and 404, for example, are concerned with implementing internal controls.

HIPAA – this act relates to health care services and therefore affects data centers that process information for hospitals and other medical facilities, since it also covers the security of electronically stored information about patients and their medical conditions.

Similarly, there are several other regulations with which data centers should comply. Some deal with the safe operation of electrical equipment, while others ensure that safe working practices are followed in all areas of the data center.

Ensuring Compliance

It can be a daunting task to comply with the various regulations to which a data center is subject. Nevertheless, this is no excuse for management or staff to ignore compliance issues or take them lightly. The first step towards compliance is to find out which regulations a data center actually needs to comply with. This is necessary because, as already mentioned, not all regulations apply to every data center facility; applicability varies with the type of data center, its location and the services it provides.

Data center management needs to establish the exact compliance requirements, and it can enlist professional third parties if it is not fully capable of doing such an analysis itself. Some regulations require compliance at the very earliest stages, such as laying out the electrical system in accordance with the relevant safety standards, while others require compliance at later stages of the data center's life.

Once the applicable regulations have been identified, management needs to ensure compliance with every single one of them and take the steps necessary to ensure that the data center adheres to the recommended guidelines. Again, it might be necessary to bring in external professional help if the data center is small, short of resources and unable to do this on its own.

It must be remembered that one of the most important steps in ensuring compliance is keeping the required documentation and paperwork up to date, since compliance not only needs to exist in the actual workplace but also needs to be documented and recorded for reference and regulatory purposes, so that everything can be shown to be as it should be.

Procedures and Work Policies

There should be set procedures for carrying out all the important activities where slight negligence or mistakes could otherwise lead to serious damage. Experience has shown that minor human errors are among the most important causes of data center failures, failures that could have been avoided had management been a little more careful in designing and laying out work procedures.

A simple example that confirms this is an incident reported some time ago in which a data center was shut down because an employee pressed the emergency stop switch by mistake. The outage cost the data center a lot of money, as well as the loss of clients due to the disruption of critical activities.

Laying down procedures is itself an elaborate task that needs to be done after careful consideration, in line with the preferred practices set out in instruction manuals and other regulatory guidance, combined with the experience of the personnel. These procedures should then be tested before being accepted as work policy, displayed at appropriate places across the data center, and reinforced through training sessions that drill them into the workers. Such training can be delivered in-house or by external vendors who specialize in professional training.

Summary

Running a data center, then, is not only about taking care of purely technical matters; the data center must also comply with the various policies, procedures, regulations and guidelines laid out by the different authorities relevant to its sphere of operation. Data center management should ensure that as many applicable regulations as possible are adhered to, so that the risk of downtime, so important to the data center industry, is kept to a minimum.

Saturday, March 21, 2009

The State of Today’s Data Center: Challenges and Opportunities

Written by Marty Ward and Sean Derrington

Data center managers are caught between a rock and a hard place. They are expected to do more than ever—including protecting rapidly expanding volumes of data and a growing number of mission-critical applications, managing highly complex and wildly heterogeneous environments, meeting more challenging service level agreements (SLAs), and implementing a variety of emerging “green” business initiatives.

And, they are expected to do it with less than ever—including fewer qualified staff and less-than-robust budgets. In fact, according to the 2008 State of the Data Center survey conducted by Applied Research, reducing costs is by far the highest key objective of data center managers today, followed by improving service levels and improving responsiveness. In other words, IT organizations are indeed laboring to do more with less.

The good news? A growing number of creative data center managers are using a variety of cost-containment strategies that capitalize on heterogeneity to increase IT efficiency and maximize existing resources while keeping costs under control. At the foundation of these solutions is a single layer of infrastructure software that supports all major applications, databases, processors, and storage and server hardware platforms.

By leveraging various technologies and processes across this infrastructure, IT organizations can better protect information and applications, enhance data center service levels, improve storage and server utilization, manage physical and virtual environments, and drive down capital and operational costs.

Increasing IT Efficiency

In IT organizations around the world, staffing remains a challenge. According to the State of the Data Center report, 38 percent of organizations are understaffed while only four percent are overstaffed. Moreover, 43 percent of organizations report that finding qualified applicants is a very big issue, a problem that is exacerbated when dealing with multiple data centers.

While 45 percent of organizations respond by outsourcing some IT tasks, a number of equally effective alternatives are also available. The most common of these strategies, used by 42 percent of organizations, is to increase automation of routine tasks. This not only reduces costs but also frees IT to address more strategic initiatives.

Storage Management

A growing number of heterogeneous storage management tools automate daily and repetitive storage tasks, including RAID reconfiguration, defragmentation, file system resizing, and volume resizing. With advanced capabilities such as centralized storage management, online configuration and administration, dynamic storage tiering, dynamic multi-pathing, data migration, and local and remote replication, these solutions enable organizations to reduce both operational and capital costs across the data center.

Furthermore, agentless storage change management tools are emerging that enable a centralized, policy-driven approach to handling storage changes and configuration drift to help reduce operational costs while requiring minimal deployment and ongoing maintenance effort.

High Availability/Disaster Recovery

High availability solutions such as clustering tools can also streamline efficiency by monitoring the status of applications and automatically moving them to another server in the event of a fault. These high availability solutions detect faults in an application and all its dependent components, then gracefully and automatically shut down the application, restart it on an available server, connect it to the appropriate storage devices, and resume normal operations.

For disaster recovery purposes, these clustering tools can be combined with replication technologies to completely automate the process of replication management and application startup without the need for complicated manual recovery procedures involving storage and application administrators. These high availability and disaster recovery solutions also ensure increased administrator efficiency by providing a single tool for managing both physical and virtual environments.
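
As a greatly simplified picture of the failover logic such clustering tools implement, consider the sketch below. The node names and the check_health, stop_app and start_app functions are placeholders invented for the example; real products also handle storage reconnection, fencing and split-brain conditions that this toy loop ignores.

    # Highly simplified sketch of monitor-and-failover logic. The helper
    # functions are placeholders for what a real clustering product does,
    # including reattaching storage and verifying dependent components.
    import time

    def check_health(node):
        """Placeholder health check; a real tool probes the app and its dependencies."""
        return True

    def stop_app(node):
        print(f"Gracefully stopping application on {node}")

    def start_app(node):
        print(f"Starting application on {node} and reattaching storage")

    def monitor(active, standby, failures_allowed=3, poll_seconds=10):
        failures = 0
        while True:
            if check_health(active):
                failures = 0
            else:
                failures += 1
                if failures >= failures_allowed:
                    stop_app(active)
                    start_app(standby)
                    return standby          # the standby becomes the new active node
            time.sleep(poll_seconds)

    # monitor("node-a", "node-b")  # would loop indefinitely while the app stays healthy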

Data Protection

Next-generation data protection can also be used to reduce the operational costs of protecting and archiving data as well as to meet internal SLAs and external governance requirements. With automated, unified data protection and recovery management tools that are available from a single console and work across a heterogeneous physical and virtual environment, organizations can maximize IT efficiency. A number of these tools provide for additional efficiencies through capabilities such as continuous data protection, advanced recovery of critical applications, data archiving and retention, and service-level management and compliance.

Maximizing Resources

In addition to containing costs through increased IT efficiency, organizations are also implementing a variety of technology approaches—from virtualization and storage management to high availability tools and “green IT” practices—to make better use of existing hardware resources.

Virtualization

Server and storage virtualization can be used to improve utilization of existing hardware, thereby obviating the need to buy additional resources. According to the State of the Data Center survey, 31 percent of organizations are using server virtualization and 22 percent are using storage virtualization as part of their cost-containment strategies.

Of course, because virtualization introduces complexity into the IT infrastructure, organizations looking to fully realize the benefits of this technology while driving down capital costs are advised to also implement a management framework that provides architectural flexibility and supports multiple virtualization platforms as well as physical environments.

Storage Management

While storage capacity continues to grow, storage is often underutilized. To make better use of storage resources, organizations can leverage storage management technologies. Storage resource management (SRM), for example, enables IT to gain visibility into the storage environment, understand which applications are connected to each storage resource, and see exactly how much of the storage is actually being used by each application. Once this level of understanding is obtained, organizations can make informed decisions about how to reclaim underutilized storage and predict future capacity requirements. Seventy-one percent of respondents indicated they are exploring SRM solutions.

In addition, thin provisioning can be used to improve storage capacity utilization. Storage arrays enable capacity to be easily allocated to servers on a just-enough and just-in-time basis.
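
A toy example of the visibility this provides: given allocated versus actually used capacity per volume (all names and figures below are invented), it is straightforward to flag underutilized volumes as candidates for reclamation or thin provisioning.

    # Minimal sketch: flagging underutilized volumes from allocated vs. used
    # capacity. Volume names and numbers are invented for illustration.
    volumes = {
        "erp_data":   {"allocated_gb": 2000, "used_gb": 450},
        "mail_store": {"allocated_gb": 1000, "used_gb": 910},
        "file_share": {"allocated_gb": 1500, "used_gb": 300},
    }

    RECLAIM_THRESHOLD = 0.40   # flag volumes using less than 40% of their allocation

    for name, v in volumes.items():
        utilization = v["used_gb"] / v["allocated_gb"]
        note = " <- candidate for reclamation or thin provisioning" if utilization < RECLAIM_THRESHOLD else ""
        print(f"{name}: {utilization:.0%} used ({v['used_gb']} of {v['allocated_gb']} GB){note}")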

High Availability

Clustering solutions that support a variety of operating systems, physical and virtual servers, as well as a wide range of heterogeneous hardware configurations provide an effective strategy for maximizing resource utilization. With these solutions, IT can consolidate workloads running on underutilized hardware onto a smaller number of machines.

Green IT Practices

Among the various strategies for meeting green IT directives are server virtualization and data deduplication. Data deduplication can decrease the overhead associated with holding multiple copies of the same data by identifying common data and reducing copies to a single entity. This, in turn, can have a dramatic impact on the amount of disk storage required for archiving purposes as well as the number of disks required for backup purposes. Seventy percent of respondents indicated they are considering implementing data deduplication in their efforts to maximize storage efficiency.
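
The core idea behind deduplication can be sketched simply: split the data into chunks, fingerprint each chunk with a hash, and store any given fingerprint only once. The example below is a deliberately naive, in-memory illustration with fixed-size chunks, not a description of how any particular backup product implements it.

    # Naive illustration of data deduplication: fixed-size chunks are
    # fingerprinted with SHA-256 and each unique chunk is stored only once.
    import hashlib

    CHUNK_SIZE = 8  # unrealistically small, just to make the example visible

    def dedup_store(data, store):
        """Record data as a list of chunk fingerprints; duplicate chunks are not stored again."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)   # only the first copy is kept
            recipe.append(digest)
        return recipe

    data = b"the same quarterly report, saved twice"
    store = {}
    recipe1 = dedup_store(data, store)
    recipe2 = dedup_store(data, store)   # the second copy adds no new chunks

    logical = 2 * len(data)
    physical = sum(len(chunk) for chunk in store.values())
    print(f"Logical bytes: {logical}, physical bytes stored: {physical}")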

The challenges data center managers face today will likely continue as they are called upon to help their organizations meet budgetary requirements while delivering critical services with fewer personnel and limited IT resources. By leveraging technologies and processes that increase IT efficiency and maximize existing resources, IT can effectively do more with less now and into the future.

Friday, March 20, 2009

A Security Expert's Guide to Web 2.0 Security

Written by Roger Thornton & Jennifer Bayuk

Web 2.0 has made the Web a livelier and friendlier place, with social Web sites, wikis, blogs, mashups and interactive services that are fun as well as useful. There are two Web 2.0 concepts that change the game for CISOs and that they need to understand.

The first is the introduction of rich client interfaces (AJAX, Adobe/Flex); the other is a shift to community-controlled content as opposed to the publisher-consumer model. Both have serious security issues.

It’s all good news about Web 2.0, right?

Yes, unless you happen to be responsible for securing the Web 2.0 environment for your business or enterprise. Then, you might just lament that we’ve taken the data-rich server model of the 1970’s and grafted it onto the interface-rich client model of the 1980’s and 90’s, giving us more capabilities but also a more complex—and vulnerable—computing environment.

We have to deal with the problems traditionally encountered with interface-rich clients: viruses, Trojans, man-in-the-middle attacks, eavesdropping, replay attacks, rogue servers and others. And all of these apply to every interface in a Web 2.0 mashup, which could have dozens of clients in one application.

In addition, the user community has changed from being simply indifferent to being willfully ignorant of the value of information. Users willingly post the most revealing details about their employers and their professional lives (not to mention their personal lives) on MySpace, Facebook, LinkedIn and Twitter—information that is easily available to just about anyone.

The problem is painfully obvious for the security professional: More complexity and openness creates vulnerabilities and opportunities for attack and the release of confidential information. This all results in more headaches for security professionals who have to be vigilant in order to keep their IT environments secure.

What’s a CISO to do?
Although some companies have tried all of these options, you can't easily write your own browser, isolate your users from the Web, or control everything that happens on their PC desktops. However, there are steps you can take that can seriously improve your odds of winning the battle over Web 2.0 vulnerabilities.

For community controlled content:
1. Educate yourself and your company, developers, vendors and end users about Web 2.0 vulnerabilities. Institute a clearing process for the use and inventory of new Web 2.0 components before they are incorporated into your business environment.
2. Segregate network access between users who need access to social networking sites and those who do not.
3. Establish a policy identifying inappropriate professional topics for public discussion on the Web or through online social services.
4. Create desktop policies and filters that block, as much as possible, interactions with unknown and untested software.

When deploying rich client interfaces:
5. Assign a cross-functional team to work with software development and application owners to educate themselves on the risks of incorporating Web 2.0 components into applications. Have your own developers recognize and control the use of potentially vulnerable tools such as ActiveX and JavaScript.
6. Require your vendors to meet secure coding standards.
7. Vigorously stay on top of vulnerabilities and exploits. Use your Web 2.0 inventory to establish a quick response plan to mitigate software as issues arise.

Thursday, March 19, 2009

Top Ten Data Security Best Practices

Written by Gordon Rapkin

1: Don’t narrow security focus during economic downturns
When IT budgets are slashed it’s tempting to concentrate only on achieving compliance with regulatory requirements in order to avoid fines, other sanctions and bad publicity.

The problem is that centring security solely on meeting the bare minimums required to be in compliance ensures that critical data is not secured as comprehensively as it should be. Gambling with data security in a downturn is a particularly risky business -- financial pressures logically lead to an increased threat level from those who are hoping to profit from purloined data. Companies should, even in difficult times, work towards comprehensive security rather than simple compliance with regulations.


2: Have a clear picture of enterprise data flow and usage
You can't protect data if you don't know where it is. Comprehensive audits typically reveal sensitive personal data tucked away in places you'd never expect to find it, unprotected in applications and databases across the network. Conduct a full audit of the entire system and identify all the points and places where sensitive data is processed and stored. Only after you know where the data goes and lives can you develop a plan to protect it. The plan should address such issues as data retention and disposal, user access, encryption and auditing.

3: Know your data
If the enterprise doesn’t classify data according to its sensitivity and its worth to the organisation it’s likely that too much money is being spent on securing non-critical data. Conduct a data asset valuation considering a variety of criteria including regulatory compliance mandates, application utilisation, access frequency, update cost and competitive vulnerability to arrive at both a value for the data and a ratio for determining appropriate security costs. Specifically gauge the risk associated with employees and how they use the data. If staff are on a minimum wage, transient and/or have low security awareness, the data may be worth more than their pay, so the risk goes up.

Usage also impacts the level of security required. If the data only exists on isolated systems behind many layers of access control, then the risk may be lower and the security controls can be applied in a more measured way.

4: Encrypt data end-to-end
Best practices dictate that we protect sensitive data at the point of capture, as it's transferred over any network (including internal networks) and when it is at rest. Malicious hackers won’t restrict themselves to attacking only data at rest, they’re quite happy to intercept information at the point of collection, or anywhere in its travels. The sooner encryption of data occurs, the more secure the environment.
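
As a small illustration of encrypting sensitive data at the point of capture and keeping it protected from then on, the sketch below uses the Fernet recipe from the third-party Python 'cryptography' package purely as an example; any vetted encryption library, together with proper key management, could play the same role.

    # Minimal sketch: encrypt sensitive data as soon as it is captured and keep
    # it encrypted in transit and at rest. Fernet is used only as an example.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # in practice, held by a key-management system
    cipher = Fernet(key)

    card_number = b"4111 1111 1111 1111"     # fake example value
    token = cipher.encrypt(card_number)      # encrypted at the point of capture

    # ... the token travels over the network and is written to storage ...

    print(cipher.decrypt(token).decode())    # decrypted only where strictly necessary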

5: Regulation is not a substitute for education
Technology controls should certainly be in place to prevent employees from intentionally or mistakenly misusing data. But it’s important that everyone understands the reasons for the data protection measures which are in place. One of the most positive steps an enterprise can make is to institute ongoing security awareness training for all employees to ensure that they understand how to identify confidential information, the importance of protecting data and systems, acceptable use of system resources, email, the company's security policies and procedures, and how to spot scams. People who understand the importance of protecting data and who are given the tools that help them to do so are a great line of defence against malicious hackers. The other side of this coin is that people will always find a way to thwart security measures that they don't understand, or that impact negatively on their productivity.

6: Unify processes and policies
Disparate data protection projects, whether created by design or due to company mergers, almost always result in a hodge-podge of secured and unsecured systems, with some data on some systems encrypted and some not, some systems regularly purged of old data on a monthly basis and others harbouring customer information that should have been deleted years ago. If this is the case within your enterprise, consider developing an enterprise-wide unified plan to manage sensitive data assets with the technologies, policies and procedures that suit the enterprise’s business needs and enable compliance with applicable regulations and standards.

7: Partner responsibility
Virtually all data protection and privacy regulations state that firms can’t share the risk of compliance, which means that if your outsourcing partner fails to protect your company's data, your company is at fault and is liable for any associated penalties or legal actions that might arise from the exposure of that data. Laws concerning data privacy and security vary internationally. To lessen the chance of sensitive data being exposed deliberately or by mistake, you must ensure that the company you are partnering with — offshore or domestic — takes data security seriously and fully understands the regulations that affect your business.

8: Audit selectively
Auditing shouldn’t be a huge data dump of every possible bit of information. To be useful it should be selective. Selective, granular auditing saves time and reduces performance concerns by focusing on sensitive data only. Ideally, the logs should focus on the most useful information for security managers; that is, activity around protected information. Limiting the accumulation of audit logs in this way helps to ensure that all critical security events will be reviewed.
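
One simple way to picture selective auditing is a logging filter that lets through only events touching data flagged as sensitive. The sketch below uses Python's standard logging module; the 'sensitive' attribute passed via 'extra' is a convention invented for this example, not a standard field.

    # Sketch of selective, granular auditing: only events that touch data
    # flagged as sensitive reach the audit log.
    import logging

    class SensitiveOnlyFilter(logging.Filter):
        def filter(self, record):
            return getattr(record, "sensitive", False)

    audit_log = logging.getLogger("audit")
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(asctime)s AUDIT %(message)s"))
    handler.addFilter(SensitiveOnlyFilter())
    audit_log.addHandler(handler)
    audit_log.setLevel(logging.INFO)

    audit_log.info("user alice read customer SSN record 1042", extra={"sensitive": True})
    audit_log.info("user alice viewed the cafeteria menu")   # filtered out, never audited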

9: Consider physical security
It seems that every week we hear about the laptop left behind in a cab, the DVDs found in the rubbish, the unencrypted backup tapes that showed up, sans degaussing, for sale on eBay, the flash drive used to steal thousands of documents, and so on. Doors that lock are as important to security as intrusion detection software. Always consider 'what if this ______ was stolen?' No matter how you fill in the blank, the question elicits a strategy for physical security.

10: Devise value-based data retention policies
Retaining sensitive data can be very valuable for analytic, marketing and relationship purposes, provided it is retained in a secure manner. Make sure that stored data is really being used in a way that brings real benefits to your organisation. The more data you save, the more data you have to protect. If securely storing data is costing more than its value to your organisation, it's time to refine your data retention policy.

Wednesday, March 18, 2009

Seven Things to Improve ITPA Implementation

Written by Travis Greene

What are you doing to prepare for the next big thing in IT management? Alright, that question is a bit unfair. You likely have all you can handle dealing with projects and ongoing operations, which makes it hard to focus on the next big thing.


And what is this next big thing anyway? Sounds like vendor hype. While a debate about the next big thing could certainly range over topics as diverse as managing IT services deployed in the cloud and heterogeneous virtualization management, it seems appropriate to include the growing movement towards IT Process Automation (ITPA).

ITPA is gaining attention because it simultaneously reduces costs and improves IT service quality across a broad range of IT disciplines. In general, automation reduces manual labor (the highest cost element in IT management) and reduces the potential for human error (with an associated improvement in service quality and availability). While introducing process manually can boost efficiency, it also tends to increase costs once the required documentation overhead is factored in. Whether there is a need to automate simple, discrete tasks or broader cross-discipline processes, ITPA is one of those rare technologies that offers compelling value for both.
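
To make this concrete, here is a deliberately tiny sketch of the kind of discrete task ITPA tools automate: an incoming event is matched to a predefined runbook step and handled without a human in the loop. The event fields and remediation actions are invented for illustration and do not represent any particular product.

    # Toy illustration of IT process automation: route an incoming event to a
    # predefined remediation step instead of waiting for a human operator.
    def restart_service(event):
        return f"restarted {event['service']} on {event['host']}"

    def clear_temp_space(event):
        return f"cleared temporary space on {event['host']}"

    RUNBOOK = {
        "service_down": restart_service,
        "disk_full": clear_temp_space,
    }

    def handle(event):
        action = RUNBOOK.get(event["type"])
        if action is None:
            return f"no automation for '{event['type']}', escalating to an operator"
        return action(event)

    print(handle({"type": "service_down", "service": "httpd", "host": "web01"}))
    print(handle({"type": "disk_full", "host": "db02"}))
    print(handle({"type": "unknown_alarm", "host": "app03"}))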

Real needs that drive ITPA implementations

With any new technology, it helps to have examples of real value that has been obtained by organizations implementing it. In preparing this article, the following organizations contributed ideas. Listed are the needs that initially drove them to adopt ITPA.

  • Managed Service Provider – Improve efficiency, measured by the ratio of servers to operations personnel, and quality of service delivery for customer-specific processes by automatically handling complex application problems, reducing the chance for human error.
  • Energy Utility – Perform job scheduling tasks to ensure on-time and consistent execution, as well as reduce the manual labor associated with starting and monitoring jobs.
  • Financial Services Company – Create user self-service processes to reduce calls and workload at the help desk.
  • Healthcare Organization – Correlate events from monitoring tools and automate event response to reduce the cost of event management and minimize downtime.
  • Large US Government Agency – Provision and manage virtual machines to ensure proper authorization and reduce virtual machine sprawl.


Seven lessons that will improve your ITPA implementation

Early adopters can reap big benefits (such as the attention of vendors seeking to incubate the technology) but will also make mistakes that prudent organizations will seek to learn from and avoid. Fortunately, as ITPA approaches mainstream adoption, these lessons are available from those who have gone before. Here are seven lessons learned from the organizations profiled above.

1. Get started with three to five processes that result in quick wins

Why three to five? Some may not work out as you planned, or may take longer to implement than originally anticipated. Moreover, demonstrating ROI on the ITPA investment will, in most cases, take more than one process. But trying to implement too many processes in the early days of implementation can dilute resources, resulting in delays as well.

The ideal processes that qualify as “quick wins” are focused on resolving widely-recognized pains, yet do not require buy-in or integration across multiple groups or tools in the IT organization. Deploying a new technology can expose political fractures in the organization. For all of the organizations listed above, their first processes did not require approvals or use outside of a specific group; yet because they targeted highly visible problem areas, they were able to justify the implementation costs and offer irrefutable evidence (real business justification) for continued deployment of IT Process Automation.

2. Identify the follow-on processes to automate before you start

While the first three to five processes are critical, it's a good idea to consider what the second act of your ITPA implementation will be, even before beginning the first. This helps maintain momentum: once the details of implementation consume your attention, the focus will be on them rather than on what is ahead, and at some point you will run out of processes ready to be implemented. A better approach is to stimulate demand by showing off the early results and building a queue of processes to be automated.

The risk of not taking this step is that the deployment will stall and ROI will be limited. ITPA technologies can generate positive ROI from the first few processes you automate; however, significant additional ROI can be generated from each new process that is automated, especially once the infrastructure and expertise are already in place. This advice may seem obvious, but almost all of the organizations listed above fell into the trap of losing momentum after the initial deployment, which could have been avoided with advance planning.

3. Consider integration requirements, now and in the future

ITPA technologies work by controlling and providing data to multiple other tools and technologies. Therefore, the best ITPA tools make the task of integration easy, either through purpose-built adapters or customizable APIs. Obviously, the less customized the better, but consider where the ITPA vendor’s roadmap is going as well. Even if coverage is sufficient today, if you decide to introduce tools from different vendors later, will they be supported?

4. Pick the tool that matches both short and long-term requirements

Besides the integration requirements listed in lesson #3, the ITPA technology options should be weighed with a long and short-term perspective. For example, some ITPA tools are better at provisioning or configuration management; others are better at handling events as process triggers. While meeting the initial requirements is important, consider which tool will provide the broadest capabilities for future requirements as well, to ensure that additional ROI is continuously achievable.

5. Get buy-in from the right stakeholders

One challenge commonly seen when selecting an ITPA technology is paralysis through analysis. This is often due to the fact that too broad of a consensus is needed to select a solution. Because ITPA technology is new, there is confusion in the marketplace over how to interpret the difference between vendor products. There is also confusion as to how this new technology can replace legacy investments such as job scheduling tools, to provide IT with greater value and functionality. Early adopters proved that the resulting overly-cautious approach ultimately delayed the ROI that ITPA offers.

What may not be as intuitive is the need to obtain buy-in from administrators, who will perceive ITPA as a threat to their jobs and can sabotage efforts to document processes. Involve these stakeholders in the decision process and reassure them that the time they save through automation will be put to better use for the business.

6. Dedicate resources to ensure success

Personnel are expensive, and one of the critical measures of the ROI potential in ITPA is how much manual labor is saved. So it may seem counter-intuitive to dedicate resources to building and maintaining automation on an ongoing basis. Yet without the expertise to build good processes, the ROI potential will diminish.

Some organizations have dedicated 20 to 40 percent of a full time employee (one or two days per week) towards working on automating processes, to get started. Once the value has been established, dedicating additional time has proven to be relatively easy to justify.

7. Calculate the return on investment with each new process automated

Much has been made of ROI in the lessons listed above. In today's macro-economic environment, a new technology purchase must have demonstrable ROI to be considered. ROI is generally easy to prove with ITPA, which is driving much of the interest. To calculate it, you need data on how long it takes to perform tasks manually and the cost of that labor time. Then you need to compare that to the percentage of that time that can be saved through automation. Note that very few processes can be 100 percent automated, but there can still be significant value in automating even as little as 50 percent of a process. Perform the ROI analysis on every single process you automate. Each one has its own potential, and collectively, over time, they can produce startling results.
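
A back-of-the-envelope version of that calculation, with all figures invented for illustration, might look like this:

    # Rough ROI estimate for a single automated process. All figures invented.
    runs_per_month = 120          # how often the task is performed
    minutes_per_run = 25          # manual effort per run
    hourly_cost = 60.0            # fully loaded labor cost per hour
    automation_coverage = 0.5     # even 50 percent automation can be worthwhile
    monthly_tool_cost = 800.0     # license/maintenance share attributed to this process

    manual_cost = runs_per_month * (minutes_per_run / 60.0) * hourly_cost
    savings = manual_cost * automation_coverage
    net_monthly_benefit = savings - monthly_tool_cost

    print(f"Manual cost per month: {manual_cost:.0f}")
    print(f"Savings at {automation_coverage:.0%} automation: {savings:.0f}")
    print(f"Net monthly benefit: {net_monthly_benefit:.0f}")

Repeated for every automated process, simple calculations like this make the cumulative return easy to demonstrate over time.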

Proceed with confidence

Although it may be new to you, ITPA has been around for years and has reached a level of maturity sufficient for most organizations that would normally be risk averse and unable to invest in leading-edge technologies. Taking these lessons on board (and learning new ones in online communities) is one way to improve your chances of success. Now is the time to enjoy the return on investment that ITPA can offer.

About the Author: Travis Greene is Chief Service Management Strategist for NetIQ. As NetIQ’s Chief Service Management Strategist, Travis Greene works directly with customers, industry analysts, partners and others to define service management solutions based on the NetIQ product and service base. After a 10-year career as a US Naval Officer, Greene started in IT as a systems engineer for an application development and hosting firm. He rose rapidly into management and was eventually promoted to the National Director of Data Center Operations, managing four data centers nationwide. In early 2002, a Service Provider hired him to begin experimenting with the ITIL framework to improve service quality. Greene was a key member of the implementation and continuous improvement team, funneling customer feedback into service improvements. Through this experience and formal training, he earned his Manager's Certification in IT Service Management. Having delivered a level of ITIL maturity, Greene had a desire to bring his experience to a broader market and founded a consulting firm, ITSM Navigators, where his team specialized in ITIL implementation consulting for financial corporations. Greene possesses a unique blend of IT operations experience, process design, organizational leadership and technical skills. He fully understands the challenges IT faces in becoming more productive while improving the value of IT services to the business. He is an active member of the IT Service Management Forum (itSMF) and is a regular speaker at Local Interest Groups and national conferences. Greene is Manager Certified in ITIL and holds a BS in Computer Science from the US Naval Academy.

Tuesday, March 17, 2009

Enterprise Application Skills

Written by Tsvetanka Stoyanova

Enterprise application skills are not new to IT, and there are hundreds of thousands, if not millions, of IT pros who specialize in this area. For many people, enterprise application skills are just one of the many areas they have some basic knowledge of, while for others they are the core competency and a lifetime career.

If you belong to the second group, then there is good news for you – the demand for enterprise application skills is not only steady – it is increasing.

Maybe you are asking yourself: should I get more training in that direction? If you don't have a solid background in enterprise applications, it will require plenty of off-the-job training. Enterprise application technology is hardly something one can learn overnight, and it is an area, like many other IT areas, where beginners are not tolerated. The demand is for enterprise application experts with years of experience. Quite simply, enterprise applications are not a beginner-friendly area, but demand for these skills is high.

There Is No Recession for Enterprise Application Skills?

It might sound strange that the demand for enterprise application skills is increasing now, when IT budgets are shrinking and news of layoffs and bankruptcies is flooding in from all directions. During the previous recession, at the beginning of this century, networking skills were in demand (at least according to some major industry analysts); now it is enterprise application skills' turn.

The demand for enterprise application skills shifts drastically up and down. Several years ago IDC reported a decrease in the demand for enterprise application skills, while now, as surprising as it might sound, the wave is going up. The fact of the matter is that demand for these skills is now up and the talent pool is shallow.

SAP skills have grown between 25% and 30% in value in recent months. Also hot are unified messaging, wireless networking, PHP, XML, Oracle, business intelligence and network security management skills. And over the past year, SANs, VoIP and virtualization posted pay gains.

SAP has so many products that even someone who decided to spend his or her whole life studying them could not cover them all, so it is useful to know which of them are the leaders. One article dealing with pay rises for SAP professionals sheds some light on which areas are the most lucrative.

It is obvious that the pay differences between different SAP products are drastic, ranging from a 57.1 percent increase for SAP Materials Management to a 25 percent drop for SAP Payroll, so it is certainly not accurate to say that all SAP experts are well paid.

Another major category of enterprise application skills that still sells is Windows enterprise application skills. This is hardly surprising: Windows is the dominant operating system in enterprises, and companies need people to maintain it.

There is a clear demand for professionals in the enterprise application arena. The demand will continue to increase as other technologies such as cloud computing become widely used as many predict.

Saturday, March 14, 2009

Proper Sizing of Your Generator for your Data Center

Written by Rakesh Dogra

Data centers are popping up around the world. These facilities that we rely on to process and store our information are becoming more and more critical every day. The high criticality of these facilities is creating a spike in data center availability requirements.

Of course, there are several factors that could disrupt the services provided by a data center, but one main factor is a grid power failure, which could bring the entire system to a halt unless the necessary provision is made for backup.

Such provision exists in the form of flywheels, UPS systems and battery backup, but these are only sufficient for a relatively short duration, ranging from a few seconds to a few minutes at most. Moreover, these systems only provide power to the critical IT equipment and not to secondary systems, including cooling. Backup or standby generators are a must if a data center is to ensure long-term reliability and provide backup power that could last a few hours or even a couple of days if circumstances so require.

Selecting a backup generator of the proper size and power rating is of utmost importance to ensure that the generator is able to cope with the demand when it is actually called upon.

Calculating the total power required for a data center is basically a simple procedure: add up the power ratings of all the equipment that consumes electrical energy, including IT and cooling equipment. Of course, all the loads may not be running simultaneously at all times, but it is always advisable to size the generator to handle peak load, since that represents the worst-case scenario and covers the maximum load at any given time in case of grid power failure.

It must also be remembered that, although the load requirements for IT-related equipment can be found by simply adding up the power ratings of the different devices, the same is not true of machinery such as electric motors. An electric motor draws a much higher current during the initial starting phase and only settles down to its normal rated value after it has attained sufficient speed. Hence the total number of motors and their power ratings play an important part in determining generator size.

Provision must be made for the situation in which all motors start simultaneously and therefore consume several times their combined rating during the starting period; this is the load the backup generator should be able to handle without much fuss.

Moreover, data centers normally tend to grow in capacity over time as the company grows, which in turn means a rise in the power and cooling requirements. While there may be no magic formula for calculating the power requirements at a given time in the future, a rough estimate of future expansion should be available based on company plans and industry trends. The generator should be able to cope with this rise in demand, so its rating should be somewhat higher than the maximum peak load calculated previously.
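
A simplified version of that sizing arithmetic might look like the sketch below. The nameplate ratings, the motor starting multiple and the growth margin are all invented figures; a real exercise would follow the generator vendor's sizing guidance and the applicable electrical codes.

    # Simplified generator sizing sketch: steady-state load, plus motor starting
    # (inrush) demand, plus a margin for future growth. All figures invented.
    it_load_kw = 300.0                        # sum of IT equipment ratings
    cooling_motor_kw = [30.0, 30.0, 15.0]     # rated power of cooling/pump motors
    other_load_kw = 25.0                      # lighting, controls, etc.
    starting_multiple = 3.0                   # motors can draw several times rated power at start
    growth_margin = 0.25                      # 25% allowance for future expansion

    steady_state = it_load_kw + sum(cooling_motor_kw) + other_load_kw
    # Worst case assumed here: all motors starting at once on generator power.
    starting_peak = it_load_kw + other_load_kw + starting_multiple * sum(cooling_motor_kw)

    required = max(steady_state, starting_peak) * (1 + growth_margin)
    print(f"Steady-state load: {steady_state:.0f} kW")
    print(f"Peak with motor starting: {starting_peak:.0f} kW")
    print(f"Suggested minimum generator rating: {required:.0f} kW")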

Another factor to keep in mind is that generators consume fuel, and the bigger the generator, the more fuel it consumes. Hence an optimum balance needs to be struck between generator size and fuel efficiency. Take a hypothetical example in which the power requirement is estimated at 50 kW but the normal load is around 15 kW. This means the generator would be running at a much lower load than its rated power, which has two disadvantages.

Firstly, since most generators are diesel-engine driven, their efficiency is quite low at low loads; secondly, a large amount of fuel goes to waste relative to the amount of power actually required. For these reasons, in large modern data centers a single generator is often not a feasible way to handle all the power requirements, so companies use an array of generators to provide the necessary power. Google, for example, has installed more than three dozen generators in its Iowa data center.

Apart from choosing generators of the right size and capacity, it is also important that they are kept well maintained and serviced at the appropriate intervals. The service interval is specified by the manufacturer either as calendar time or as running hours, and this schedule should be strictly adhered to. Routine operation of standby generators is necessary to ensure that they start without problems during an actual emergency, and when running they should be properly monitored for any indication of a possible fault.

Testing of the backup generator also has different levels. While some data centers might be content with starting the generators once a week, running them for some time and shutting them down, other companies recommend drills that monitor the real availability of these generators in times of need. In such a drill, the grid power is deliberately cut off so that the reaction of the generators and automatic transfer switches (if present) can be observed. In practice, however, data center managers are often reluctant to follow this approach, as they shudder at the thought of a possible loss of availability of their data center.

In summary, a backup generator of the right size and rating is a necessary piece of machinery for every data center. Its installation helps ensure that the services provided to clients continue uninterrupted despite any failure of grid power, for whatever reason.

Friday, March 13, 2009

Virtual Infrastructure Management

Written by Stephen Elliot

With its ability to reduce costs and optimize the IT infrastructure by abstracting resources, virtualization has become an increasingly popular tactic for enterprises having to compete in an ever more challenging global economy. According to a recent study CA conducted with 300 CIOs and top IT executives, 64 percent of respondents say they've already invested in virtualization, and the other 36 percent reported that they plan to invest in virtualization.

You don't have to look very hard to find the biggest reasons for virtualization's widespread adoption: cost savings and improvements in IT agility. Because of the current global economic crisis, CIOs are being asked not only to do more with less, but also to do it with lower headcount while delivering higher IT service levels. By wringing more performance out of the infrastructure without adding huge line items to the budget, virtualization provides the more-bang-for-less-buck solution that organizations are looking for. Increasingly, enterprise IT organizations want to host more critical workloads on virtual machines; however, the management risks must be reduced.


Respondents to CA's study said they are also implementing virtualization for technical reasons such as easier provisioning and software deployment. But although virtualization gives IT organizations a tremendous opportunity to compress the processes and cycle times between production and application development teams, build more agility into the infrastructure and automate more processes, it also adds complexity: new expertise is needed to run the software, and the management processes around it must be tweaked and adjusted.

The Challenges Facing Virtual Infrastructure Management

One major technical issue facing organizations looking to add virtualization to their IT infrastructure is the limitations of system platform tools. As the virtual machine count begins to creep up, platform tools can't provide the amount of granular performance data necessary to give the IT staff a complete picture of what's going on.


Couple that with the heterogeneous mix of virtualization platforms companies are using and management challenges begin to have an impact on IT's ability to accelerate the deployment of virtual machines. The bottom line is that both platform management and enterprise management solutions are required to deliver an integrated business service view of both physical and virtual environments.

Similarly, organizations need the ability to integrate the physical infrastructure with its virtual counterpart in order to automate configuration changes, patch management, server provisioning and resource allocation. The key business service outcomes of this are lower operations costs, improved ROI from virtualization deployments, and an end-to-end view of an IT service.

CIOs are also looking at virtualization for more than just cost savings. They're looking for a management solution that will transform their IT organizations and demonstrate success via measurable metrics and key performance indicators, whether those are business measures such as inventory churn and increasing margins, or technical metrics like server-to-admin ratio (or virtual-machine-to-admin ratio), or even a reduction in the number of trouble tickets sent to the service desk. The goal is to deliver business transformation in an ongoing, measurable manner to mitigate the business risk of a growing virtualization deployment.
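
As a rough illustration of the technical metrics mentioned above, the snippet below computes a managed-instances-per-admin ratio and a trouble-ticket reduction from two hypothetical before-and-after snapshots. The numbers are invented for the example and are not taken from the CA study.

  # Hypothetical KPI snapshot for a virtualization programme (invented numbers).
  before = {"servers": 400, "vms": 0,   "admins": 20, "tickets_per_month": 950}
  after  = {"servers": 120, "vms": 600, "admins": 18, "tickets_per_month": 700}

  def managed_per_admin(snapshot):
      # Count both physical servers and virtual machines as managed instances.
      return (snapshot["servers"] + snapshot["vms"]) / snapshot["admins"]

  improvement = managed_per_admin(after) / managed_per_admin(before)
  ticket_drop = 1 - after["tickets_per_month"] / before["tickets_per_month"]

  print(f"Managed instances per admin: {managed_per_admin(before):.0f} -> {managed_per_admin(after):.0f}")
  print(f"Improvement factor: {improvement:.1f}x, trouble tickets down {ticket_drop:.0%}")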


The Solution: Virtualization as Strategy, Not Just Tactic

All of these challenges point to a common solution that transforms virtualization from an ad hoc cost-savings tactic into a more strategic enterprise platform. Rather than merely increasing the number of virtual machines, IT can take the opportunity to think about how to get the most out of compressing the processes between teams, increasing workflow automation, reducing handoff times, reducing configuration check times and increasing compliance. These are the foundational steps that lead to IT transformation and successful business service outcomes. Without these capabilities, the failure rate of projects and the associated costs increase substantially.

Where Virtual Infrastructure Management Is Headed

Another reason that viewing virtualization as an enterprise platform is becoming crucial is that virtual machines are taking on different forms as the technology evolves. The management of desktop virtualization is becoming increasingly important as its popularity grows. One particular challenge is the number of different architectures that need to be taken into consideration for any desktop virtualization solution.


Likewise, a growing number of organizations are investigating network virtualization. In particular, Cisco's new virtual switch technology, which includes embedded software from VMware, has been making ripples across the IT world.

Having an enterprise platform in place makes such new developments in virtualization easier to implement and manage. The better an organization plans for the management, processes and chargeback opportunities virtualization offers, the more IT can lead the business outcome discussion and drive out measurable success.

While virtualization has already helped transform data centers, drive consolidation efforts and reduce power and cooling costs, we've just scratched the surface. There's a lot more to go.

Thursday, March 12, 2009

How To Become Productive At Work (Part I)

Each day starts with the best of intentions. There are deadlines to meet, essential work to finish, important business meetings & phone calls, and short- and long-term projects to start. As the day comes to a close and we are wrapping up to leave, we discover that barely a fraction of what we had on our to-do list has been accomplished. As a result we make a mental note to come in early the next day, stay late, and work weekends as well. Yes, we are busy, but are we productive?

A professional is hired for one reason: he or she demonstrates the potential to be productive at work. So now you are there in a cubicle, facing a computer, with the expectation that you will do something good for the company. Do you sometimes feel stuck at work? Would you like to be more productive and feel a greater sense of accomplishment at the end of each day? Well, you can. It just takes the desire and commitment to renew your habits and routines.

A productive environment leads to productive employees. This article is divided into two parts. This week we explain why a productive environment is necessary to motivate employees and make them industrious at work, while next week we will focus on how employees themselves can cultivate productivity in their own work.

Why is a productive environment necessary?

Employees produce good results when their managers treat them well and the organization pays special attention to their professional needs. So the question arises: What do most talented, productive employees need from a workplace?
Good managers recognize employees as individuals and do not treat everyone at a collective level. They don’t try to “fix” people and their weaknesses; instead, they excel at turning talent into performance. The key to productivity is to make fewer promises to your employees and then strive to keep all of them.


What does a great workplace look like? Gallup took up the challenge and eventually formulated the following questions:

The Twelve Questions to Measure the Strength of a Workplace:

  1. Do I know what is expected of me at my job?
  2. Do I have the materials and equipment I need to do my work right?
  3. Do I have the opportunity to do what I do best everyday?
  4. In the past seven days, have I received recognition or praise for doing well?
  5. Does anybody at my workplace seem to care about me as a person?
  6. Is there anyone, be it a supervisor or a colleague, who encourages my development?
  7. Do my opinions seem to count at my workplace?
  8. Does the mission/purpose of my company make me feel that my job is important?
  9. Are my co-workers committed to accomplishing excellence while performing their job responsibilities?
  10. Do I have a best friend at the organization I work for?
  11. Has someone at work talked to me about my progress in the last six months?
  12. In the last year, has my job given me opportunities to learn and grow?

The results showed that employees who responded positively to the 12 questions worked in business units with higher levels of productivity, profit, employee retention and customer satisfaction. It was also discovered that it is the employee's immediate manager, and not the pay, benefits, perks or a charismatic corporate leader, who plays the critical role in building a strong workplace. The implication is that people leave managers, not companies: if your relationship with your immediate manager is fractured, no amount of company-sponsored daycare will persuade you to stay and perform.
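
Purely as an illustration of how such a survey can be rolled up into a number per business unit, the sketch below averages yes/no answers to the twelve questions across employees. The units and answers are invented for the example, and a real survey would likely use a graded scale rather than simple yes/no answers.

  # Illustrative roll-up of the twelve-question survey per business unit.
  responses = {
      # business unit -> one list of yes/no answers (True = positive) per employee
      "support_desk": [[True] * 12, [True] * 10 + [False] * 2],
      "field_ops":    [[True] * 6 + [False] * 6],
  }

  def unit_score(answer_sets):
      """Average share of positive answers across all employees in the unit."""
      positives = sum(sum(answers) for answers in answer_sets)
      possible = sum(len(answers) for answers in answer_sets)
      return positives / possible

  for unit, answer_sets in responses.items():
      print(f"{unit}: {unit_score(answer_sets):.0%} positive responses")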

Relationship between managers, employees & companies:

According to the Gallup survey:
  • A bad manager can scare away talented employees, draining the company of its power and value. Top executives are often unaware of what is happening down at the front lines.
  • An individual achiever may not necessarily be a good manager; companies should take care not to over-promote.
  • Organizations should hold managers accountable for employees’ response to these 12 questions.
  • They should also let each manager know what actions to take in order to earn positive responses from his employees, because an employee's perception of the physical environment is colored by his relationship with his manager.

Bring out the best:

The Great Manager Mantra is: People don’t like to change that much. Don’t waste time trying to put in what is left out. Try to draw out what is left in.

Managers are catalysts:

As a catalyst, the manager speeds up the reaction between the employee's talents and the achievement of the company's goals and objectives. In order to earn positive responses from his employees, a manager must:
  1. Select a person
  2. Set expectations
  3. Motivate the person
  4. Develop the person

Why does every role, performed at excellence, require talent?

Great managers define talent as “a recurring pattern of thought, feeling, or behavior that can be productively applied”, or the behavior one finds oneself doing often. The key to excellent performance is matching the right talent with the role to be played.

“Excellence is impossible to achieve without natural talent.”

Every individual is unique, with a personality, dignity and self-respect of his or her own. Without talent, no amount of new skills or knowledge can help an employee in unanticipated situations. In the words of great managers, every role performed at excellence deserves respect; every role has its own nobility.

Comfortable environment:

In today’s competitive corporate world, it is becoming increasingly important to focus on the appearance of the workplace. With a mounting number of people spending more time in their offices, the physical comfort, visual appeal and accessibility of their workplace has gained ever more importance. Wouldn’t it make far better sense to retain valuable employees by making small, yet meaningful, aesthetic adjustments to their work environments?

Studies have shown that employers who care about their employees and their work environment foster more motivated and productive people. There is a strong relationship between motivation and productivity at the workplace: employees who are inspired will be more diligent, responsible and, eventually, more industrious.

Well lit, airy & clean:

Employees spend six to eight hours at their workplace every day, which makes the workplace a second home. It is up to the employer to make sure that the office is fully equipped and in good working order. It must be well lit and well ventilated, with the right amount of lighting, fans and air conditioning. Cleanliness is of utmost importance given the number of people working in one place: the offices, cubicles, rest areas, washrooms, and kitchen & serving area must be kept neat and clean. The more comfortable the working environment, the more productive the employees will be.

Safety measures:

An employer must make sure that he or she provides a safe environment for employees. Security measures outside the office include security guards and a secure parking facility. Inside the office, the environment must be safe for both male and female employees, so that anyone who has to work late hours feels safe and comfortable doing so. There must be no discrimination or harassment, and every employee should be given an equal opportunity to grow as an individual, regardless of gender.

The power of recognition:

Acknowledgment is a powerful motivator. If you praise your employees and acknowledge their efforts they will feel better about themselves and about the hard work they have put in.

The saga of raise:

It was once believed that a “salary increase” is the most obvious tool for encouraging employees to work hard. Today, several studies have discredited that idea. Employees do not become more productive simply because they are paid more; after all, they do not calculate the monetary value of every action they perform. Studies show that while a raise makes employees happy, there is an abundance of other things that can accomplish the same result.

The power of praise:

A pat on the shoulder can work wonders. For effective management, a manager must recognize that fairness and leadership alone cannot inspire his staff to work hard. Deep down, all of us crave appreciation. Praise is an affirmation that an employee did something right, and every time he receives a compliment in the workplace he pushes harder to earn the same recognition the next time around.

The importance of incentives:

Incentives, even those with no monetary value, are just as important as praise. An incentive can be thought of as praise in physical form: a reward for a job well done. Managers tend to overlook non-monetary incentives, even though these have been found to dramatically increase an employee's sense of worth in relation to the work accomplished. They could be company-logo mugs, shirts or business card holders; whatever you decide to give your employees as an incentive, never lose sight of the need to recognize their efforts, whether verbally or through small office gifts.

Wednesday, March 11, 2009

Desktop Virtualization – Has it hit your desk yet?

Written by David Ting

The discussion on desktop virtualization, or hosted virtual desktops, is heating up. Some view it as futuristic. Others say it is a throwback to the world of mainframe computing. With economic concerns forcing businesses to take a hard look at expenses across the enterprise, however, there are many reasons this is such a hot topic.

In our current cost-conscious world, the potential to reduce IT costs is obvious: virtualization significantly reduces the need for idle computing hardware and drastically lowers power consumption - especially in mission-critical environments like healthcare where machines need to be on 24 hours a day. Lower power consumption comes from replacing lightly loaded but high-powered CPUs at each desktop with desktop sessions for multiple users delivered from a server that can be heavily loaded. Most importantly, virtualization frees IT from having to maintain large numbers of desktop systems that are largely user managed. It also eliminates the need to constantly re-image machines that have degraded through everyday usage. Imagine how many fewer headaches we would have if we could have a fresh copy of the OS image every day - and not have to suffer through the "plaque" build-up that slowly kills performance.

This all sounds good. But before diving headfirst into the virtualization pool, it's important to realize that the benefits of desktop virtualization also bring new security challenges - especially around managing user identities, strong authentication and enforcement of access policies.

With user identities being relevant at multiple points within the virtual desktop, coordinating and enforcing access policies becomes far more difficult and error prone because all the systems have to be kept in sync. Since one of the advantages of virtual desktops is the ability to dynamically create desktops specific to the user's role within the organization, having a centralized way to manage user identities, roles and access (or desktop) policies is critical in this new virtualized environment. Allowing users to access only tailored desktops specific to their role or access location can be tremendously valuable in controlling access to computing resources. Being able to leverage a single location for authenticating users, obtaining desktop access rights and auditing session-related information is equally important, if not more so, than in a conventional desktop environment.
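
To make the idea of centrally managed roles and desktop policies a little more tangible, here is a purely illustrative sketch. The roles, locations and policy attributes are invented for the example and do not reflect any particular vendor's product.

  # Illustrative sketch: pick a virtual desktop policy from one central table
  # based on the user's role and access location (all values are invented).
  DESKTOP_POLICIES = {
      # (role, location) -> policy attributes
      ("clinician", "on_site"):   {"desktop": "clinical-full", "usb": False, "timeout_min": 15},
      ("clinician", "remote"):    {"desktop": "clinical-view", "usb": False, "timeout_min": 5},
      ("back_office", "on_site"): {"desktop": "office-standard", "usb": True, "timeout_min": 30},
  }

  def desktop_policy(role, location):
      """Return the tailored desktop policy, or deny access if none is defined."""
      policy = DESKTOP_POLICIES.get((role, location))
      if policy is None:
          raise PermissionError(f"No desktop defined for role={role!r} at {location!r}")
      return policy

  # Example: a clinician connecting from home gets the restricted, view-only desktop.
  print(desktop_policy("clinician", "remote"))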

Adoption will take some time to become common, because security capabilities and limitations still present a barrier. Even so, we're beginning to see customers who need to address these issues by connecting user identity with authentication and policy all the way from the client to the virtualized session, and even to the virtualized application.

Desktop virtualization has tremendous promise. However, until we can replicate the user's current experience and, more importantly, make it easier to set and enforce authentication and policy in this environment, there is still work to be done.

Tuesday, March 10, 2009

Internal Regulations for Data Access

Written by Tsvetanka Stoyanova

Data is very important to every company, organization and government. In our age of computers, data has become as precious as gold. Data wields power, but data misuse can create havoc. Data is difficult to protect. It seems like every month a new media report describes the hacking of data and no one is immune.

The protection around data can appear as solid as steel, but overconfidence in your data’s protection is a fool’s path. You can never be 100% sure when your information will be accessed illegitimately.

Take a look at those media reports again and notice that it is not just the small and medium sized companies that are being hacked, but everyone from government agencies to members of the Fortune 500.

Why Is It Important to Have Internal Regulations for Data Access?

It is obvious that internal regulations for data access are important. Data misuse is too common to be neglected, and the hackers are not the only ones to blame; the lack of a solid security program is also at fault. Data is valuable, and if you have data you should take steps to keep it protected - not only from outside intruders, but from the inside as well.

Unfortunately, many cases of data theft are inside jobs. Some sources say that 80% of threats come from insiders and that 65% of internal threats remain undiscovered! This is scary, to say the least. While you can't treat all your employees as suspects, it is essential to have a program in place to monitor for internal breaches. In some cases employees are simply unaware that the information they are gathering is off limits, so it is important to communicate the company's policies on accessing data to everyone who has access, or an easy means to intrude.

No company wants to make the headlines or become known for internal data theft, insider trading, or leaks of sensitive information. That's why you need to have internal regulations for data access. Most important, make sure that they are followed without exceptions.

Internal Regulations for Data Access

Protecting data involves many steps, some of which are described in the Data Protection Basics article. However, since internal regulations are an extensive subject, we'll deal mainly with them here. Well-defined internal regulations for data access are the basis of your data protection efforts.

The main purpose of any internal regulations program for data access is to prevent intentional and unintentional data misuse by your employees. This can be a difficult task. Let us review some steps that you should consider.

  • Check all applicable regulations and industry requirements for changes and updates. Keeping an eye out for changes is not enough; you should have a good understanding of what each regulation asks of you. For example, in Europe many professionals rely on the EU Data Protection Directive. This is a good start; however, on closer inspection the Directive only provides general guidance, not detailed steps. The detailed steps are provided by the regulations of individual countries.
  • Make your employees aware of the risks of unauthorized data access. 99% of data center staff are aware that data is gold and won't misuse it unintentionally; the remaining percentage is what you need to watch out for. While most data theft is intentional, there are also cases of leakage in which an employee has been fooled by a third party, and as obvious as it may seem, you need to make sure this never happens. I recall a case in which a software developer, who had just started his first full-time job with a company, was tricked by a “friend” into showing the source code of one of the products the company was developing. The thief rebranded the stolen source code, launched it as his own product and began competing with the company he robbed.
  • The minimum privileges rule. In the above example, the theft might not have happened if the developer had not had access rights to the full source code. It is important to grant access sparingly: an employee should only have access to the data he or she needs in order to perform his or her daily duties (a minimal sketch of this rule appears after this list). A process like this may slow development, but that is tolerable compared to losing the information.
  • Classify your data so that you know what is sensitive. There are degrees of sensitivity that need to be classified; financial and health records should be in the highest tier. Data classification can be an enormous task, but once it is completed only updates remain.
  • Define primary and secondary access users. It is good practice to assign primary access and then secondary access in the event something happens to the person who has the first tier access.
  • Physical access. Ensure that your facility has the proper physical security levels. This includes a secure facility with card access entry points, identification badges and security code access to the building.
  • Access to machines and applications. Physical access includes access to premises and machines but very often one doesn't have to have physical access in order to get hold of sensitive data. You also need to define rules for access to machines and the applications on them. Also, think about backups and virtual machines – don't forget to cover them as well. In some cases access restrictions are limited to some period of time only (for time-sensitive data, which after the critical period has expired becomes publicly available), while in others they are for the entire life cycle of the data.
  • Be sure to have a policy in place for ex-employees. Remove access rights and change codes immediately to avoid theft. Be wary of employees who voice negative statements about the company or who are disgruntled for any reason.
  • Keep an eye out. As mentioned above, some sources say that as much as 65% of internal thefts go unnoticed. Watch for possible violations and investigate them right away.
  • Know who you should contact in the event that you find or see a data breach.
  • Create standard operating procedures (SOPs). The National Institute of Standards and Technology (NIST) has published guidelines for bolstering the response capabilities of enterprises.
  • If you are hacked, preserve all evidence and have a process in place to do so, including keeping the affected equipment available.
The measures above are not an all-inclusive list. Whether an investigation is internal or external, computer-based fraud and electronic data theft are extremely serious security issues. Whatever the situation, employ a data breach response plan that preserves evidence, helps catch the criminals, and ensures that the enterprise closes any vulnerabilities.
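
As a minimal sketch of the minimum-privileges and data classification rules from the list above, the snippet below expresses access as an explicit allow-list checked on every request. All of the roles, assets, tiers and grants are invented for illustration; in practice these rules would live in your directory or identity management system rather than in code like this.

  # Illustrative least-privilege check combining data classification tiers
  # with explicit per-role grants. All names and rules are example values.
  CLASSIFICATION = {
      "payroll_db":      "restricted",     # financial records: highest tier
      "patient_records": "restricted",
      "product_source":  "confidential",
      "press_releases":  "public",
  }

  # Explicit allow-list: a role sees nothing unless it is granted here.
  GRANTS = {
      "payroll_clerk": {"payroll_db"},
      "developer":     {"product_source"},
      "pr_officer":    {"press_releases"},
  }

  def can_access(role, asset):
      """Allow access only to classified assets explicitly granted to the role."""
      if asset not in CLASSIFICATION:
          return False              # unclassified data is denied until it has been classified
      return asset in GRANTS.get(role, set())

  assert can_access("developer", "product_source")
  assert not can_access("developer", "payroll_db")    # least privilege in action
  assert not can_access("intern", "product_source")   # unknown roles get nothing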
