Tuesday, September 20, 2005

 

Don't trust security to techies alone, Gartner says

Gartner has commented that putting IT professionals solely in charge of security is just not enough. In Gartner's view, IT professionals are too technical to understand the business's needs and direction.

Well, in my opinion, they might just be right about being able to cut costs (the usual) and get a faster ROI by spending less and coming up with better strategies.

What they have done, though, is separate security from people, which is more or less incorrect. Security does not consist only of hardware; people are part of security as well. We have our security passwords, security IDs and email.

Placing the firewall at the bottom of the priority list will definitely get them into trouble; if you can't secure your perimeter, the battle is as good as lost. Worse still, they might get somebody fresh out of college to set up that firewall.

Instead, I would suggest not handing security to business-focused managers (who know next to nothing about security and would usually suggest cost cutting to protect their own positions), but rather training these techies and bringing them to the business table.

Shutting the techies out and simply expecting them to listen to you is what we call the blind leading the blind. For this to work, let the techie talk first, and let management open their ears, listen (which they usually don't) and digest (I know it's a little bit difficult).

Management's part would then be to explain (not command) the situation and their point of view. Getting both parties to understand the scenario helps them reach a compromise solution.

Having done that, the security professionals will be able to understand the company's direction and thereby help bring the company there.

Source: http://news.com.com/Dont+trust+security+to+techies+alone%2C+Gartner+says/2100-7350_3-5868906.html?tag=cd.top

Wednesday, September 14, 2005

 

The myths of open source

Technology Updates

Who's using open source, why, and are the benefits worth the risks? By Malcolm Wheatley, CIO

Once seen as flaky, cheap and the work of amateur developers, open source has emerged blinking into the daylight. So who's using open source? Why are they using it? And are the benefits worth the risks? The answers are surprising -- and dispel some of the myths surrounding open source.

At first glance, the company Employease seems unremarkable. But look a little closer. Employease, which provides employee benefits administration services to more than 1,000 organisations across America, has an IT architecture chiefly built around open-source software, which makes it a rare bird -- not that it was planned that way when the company was founded in 1996.

"It's been quite a surprise to me. The open-source model just seems intuitively wrong," says John Alberg, the company's cofounder, CIO, CTO and vice president of engineering. But the facts speak for themselves.

The company's 25 production application servers run on Red Hat Linux, having been switched from Windows NT in July 2000. Web pages once delivered by Netscape are now served by Apache, supplemented by Tomcat, an open-source Java servlet engine. Send an email to Employease and it's processed by Sendmail, an open-source mail server, while the company's software developers use XEmacs, an open-source development tool.

But that's not all. Although the company's main applications use Informix for database management, Alberg happily confesses that he can see a time when the proprietary software will be displaced by MySQL, an open-source relational database system already used by the company for less critical applications. Snort, an open-source intrusion detection tool, is also under active consideration, says Alberg.

Companies such as Employease herald a sea change in corporate attitudes toward open-source software. Once seen as flaky, cheap and the work of amateur developers, open source has emerged blinking into the daylight. With unrestricted access to the source code to run or modify at will, and support coming from an ad hoc collection of software developers and fellow users, the open-source model is very different from proprietary software. But it is nevertheless proving attractive enough for a host of CIOs to make the switch.

Myth 1: The attraction is the price tag
One of open source's most touted benefits is its price. Download the software, install it -- and don't pay a penny. That's the theory. But to a surprising number of open-source user companies, the price tag -- or lack of one -- is irrelevant. "It's not about being cheap," insists Employease's Alberg. "It's about doing our jobs effectively -- and we're willing to pay quite a bit for that. We want stable software that does what it says it will do."

What Alberg finds fascinating about moving to open source is the performance improvement that resulted. The move to Linux, for example, dramatically cut the rate of server failure experienced by the company. Typically, under NT, one of the company's servers would fail each working day. Now, he says, "we get at most two failures a month -- and often don't get any in a month."

Linux also runs Alberg's applications faster than NT, a fact that has meant that despite more than doubling its business since 2000, the company hasn't needed to buy more servers. "Linux increased our capacity by between 50 per cent and 75 per cent," says Alberg.

Even so, Alberg is careful to make clear that his commitment to open source isn't the blind buying behaviour of a zealot. He wouldn't, for example, go open source if it were more expensive than proprietary code. "Solaris is a strong commercial operating system. We'd choose it over open source if we found it to be less expensive," he says. "[While] cost is a huge driver for our decision-making process, we cannot risk choosing an inferior solution to save money. We couldn't even consider open source if it weren't at par with -- or in some cases better than -- commercial alternatives."

Ask many users of open source and a similar story emerges. "Cost savings weren't really a factor in our decision to go open source," says John Novak, CIO of 330-plus hotel chain La Quinta, which is moving its online booking system -- previously on BEA's WebLogic -- to a combination of Apache, JBoss and Tomcat. "What got us into it was that it was simply the best technology open to us."

Myth 2: The savings aren't real
Open-source software has been described as "free, as in a free puppy." And yes, the absence of software licensing fees needs to be offset along with the costs of training, support and maintenance. On the other hand, proponents of open source also cite reduced costs of "vendor churn," where vendors require users to migrate to a new version or pay for extra support. Most users we spoke to for this story reported a net savings with open source -- often a substantial one.

At Sabre Holdings -- the company behind Travelocity, the Sabre Travel Network and the Sabre travel reservation system -- a major migration to open source is under way, prompted by Sabre's prediction that the move will yield savings of tens of millions of dollars during the next five years.

The company runs two distinct groups of computers, explains CTO Craig Murphy. Where reliability is paramount -- for the pricing, or "data of record", applications -- Sabre Holdings uses HP's high-spec, fault-tolerant NonStop systems.

Shopping applications -- where customers and travel agents hunt for the best deals -- run on a server farm of lower-cost machines. Each shopping computer has its own open-source MySQL database, explains Murphy, synchronised by an application from GoldenGate with the rules, fares and availability information held on the fault-tolerant "data of record" system. The shopping systems were on HP-UX, but by the beginning of this month, all of those servers will have switched over to an open-source operating system -- Red Hat Enterprise Linux AS.

The big attraction of open source is that there's a zero marginal cost of scale because open source doesn't require additional licences as an installation grows, he says. As a result, the cost per transaction plummets as you add more systems. Exact comparisons are tricky, says Murphy, "but where we can make like-for-like comparisons, we're expecting at least an 80 per cent reduction in running cost."
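
As a rough illustration of that scale argument, the sketch below compares the software cost per transaction when licences are charged per server with the case where the software cost is a flat overhead. Every figure in it is invented for illustration; none of them are Sabre's numbers.

# Illustrative only: per-transaction software cost, per-server licensing
# versus a flat (licence-free) cost. All figures are made-up assumptions.
LICENCE_PER_SERVER = 3_000          # hypothetical per-server licence fee
FLAT_OPEN_SOURCE_COST = 30_000      # hypothetical fixed support/engineering cost
TRANSACTIONS_PER_SERVER = 1_000_000

for servers in (10, 100, 1_000):
    tx = servers * TRANSACTIONS_PER_SERVER
    per_tx_licensed = (servers * LICENCE_PER_SERVER) / tx   # constant per transaction
    per_tx_open = FLAT_OPEN_SOURCE_COST / tx                # falls as servers are added
    print(f"{servers:>5} servers: licensed ${per_tx_licensed:.6f}/tx, "
          f"open source ${per_tx_open:.6f}/tx")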

Myth 3: There's no support
According to Gary Hein, an analyst with technology consultancy Burton Group, technical support is a potential open-source user's primary concern. "Who do you call when things go wrong? You can't wring a vendor's neck when there's no vendor," he says.

In practice, the situation is complex. As Hein points out, most open-source projects have a large corps of developers, Internet mailing lists, archives and support databases -- all available at no cost. That's the good news. The not-so-good news is that there's no single source of information. "A simple question may result in multiple, conflicting answers with no authoritative source," he says.

Even so, says Klaus Weidner, a senior consultant with technology consultancy Atsec, multiple sources of support can be better than being tied to one vendor -- especially when that vendor provides bad support or refuses to continue supporting software of a certain vintage.

In practice, existing users of open-source software appear perfectly happy with open-source support arrangements. "The breadth of resources available for open-source applications is so great worldwide that we can get support, communicate with a developer or download a patch no matter the time of day," says Thomas Jinneman, IT director of RightNow Technologies, an ASP that hosts customer service products for more than 1,000 companies worldwide, including British Airways, Cisco Systems and Nikon.

The company's hosting environment runs on Linux, Apache and Tomcat, and 97 per cent of its customers use MySQL, says Jinneman. Indeed, he adds, "we've had more trouble getting support for some of our purchased commercial applications than we've had with open-source applications."

Some open-source applications also have support offered by the original developers. JBoss, for example, is backed by JBoss Group, which includes the 10 core developers who wrote the application. Depending on the contract, explains JBoss Group President Marc Fleury, users can obtain 24/7 professional support with as little as a two-hour response time. The group also offers training.

A similar model also underpins Sourcefire, whose founders created Snort, the popular open-source intrusion detection tool. Downloaded off the Internet, Snort is command-line-driven, explains Sourcefire CTO Martin Roesch. Enterprise users can set it up themselves -- but more and more are contracting Sourcefire to do it instead so that the company can handle security management details.

"What I like is that you get all the advantages of open source in terms of people working on it, as well as the advantages of a commercial enterprise behind it in terms of longevity and liability," says Kirk Drake, vice president of technology for the National Institutes of Health Federal Credit Union.

Myth 4: It's a legal minefield
A variety of open-source licences exist, and helping CIOs understand their implications is good business for lawyers -- very good business. "[CIOs'] concerns chiefly revolve around the implications of using code to which they can't verify their rights," says Jeff Norman, a partner in the intellectual property practice of law firm Kirkland & Ellis. "Just because you've got a piece of paper saying that you own the Brooklyn Bridge, it doesn't mean that you actually own it."

For some users, third-party indemnification is an option. In 2003, for example, JBoss Group announced it would indemnify and defend its customers from legal action alleging JBoss copyright or patent infringement. Other vendors of open-source software -- including HP, Red Hat and Novell -- also offer indemnifications of varying types.

And while conceding that the situation isn't perfect, Sabre's Murphy says that he's heard all the legal arguments he needs. "It's a concern, sure, but we've basically got to do this. There may be friction and challenges -- but I don't see any showstoppers."

Myth 5: Open source isn't for mission-critical applications
Mission-critical apps don't come any more crucial than those in banking, where transaction systems simply have to work, period. Experimenting with open source, with its attendant risks in terms of potential infringement, security and maintenance, might be regarded as anathema. "Banks tend to be conservative institutions -- first followers, if you like, rather than leaders," says Clive Whincup, CIO of Italian bank Banca Popolare di Milano, who freely admits that the bank's venture into open source was the result of "some fairly lateral thinking."

But walk into Banca Popolare's smart new branch on the Via Savona in Milan's Zona Solari district, and the service these days is much faster than customers have previously experienced. The reason? Unwilling to throw out the bank's legacy banking applications, totalling some 90 million lines of Cobol, but unable to keep them running under IBM's vintage OS/2 Presentation Manager operating system, Whincup has used a proprietary legacy integration tool from Jacada to connect the Cobol to IBM's WebSphere -- running in a Linux partition on the bank's mainframe.

The result: formerly disjointed applications now run slickly in a Web browser, yielding faster transaction times, less time spent training tellers -- and many more opportunities for cross-selling the bank's services.

Billed by insiders as one of Europe's largest Linux projects, the Zona Solari branch is piloting the new system, says Whincup. Once testing is complete, full rollout will begin in May. One decision to be made before then: whether to leave the branch desktops running Windows XP, as in the Zona Solari pilot, or move them to Linux as well. "Both of the next two branches to pilot the system will be using Linux [on the desktop]," Whincup says.

Myth 6: Open source isn't ready for the desktop
At Baylis Distribution, a transport and distribution company, IT Director Chris Helps came across the MySQL database four years ago when the company was looking to create a data warehouse. Around the same time, the company began experimenting with Linux, he says, for small-scale, non-critical applications. The move to mission criticality came last year after the vendor of the company's proprietary logistics management system, Chess Logistics, brought out a new version that ran on Linux -- a version that promised to improve performance by a factor of between 10 and 15. Helps happily signed up, and he hasn't regretted the decision.

But his experience of running Red Hat Linux in a true production environment, with users logging on to the main Linux server from what he describes as "thin clients with a cut down Linux operating system," prompted him to re-evaluate the company's desktop policy. In the end, the company opted to replace Microsoft on desktops with Linux and open-source personal productivity tools for activities such as word-processing and spreadsheets.

"We've not done a formal evaluation of the savings, but a broad-brush calculation is that it costs $1,820 (£1,000) per seat to install a PC with all the Microsoft tools a user needs. With Linux, and open-source tools, it's only around half that," Helps says. What's more, usability improved. "People can log in from any PC in the group and have all the same services and facilities available to them as if they were sitting at their own desks." Better still, IT support is simplified. "We haven't got the complications of users establishing a unique personalised environment on their desktops: We've got better control, better upgradeability and better traceability."

Nor is Helps alone. Other IT shops -- as big and diverse as Siemens Business Services and the Chinese government -- are also convinced that Linux is ready for the desktop. Siemens, for example, says it has performed extensive testing with "real-world, non-technical workers," finally declaring that Linux has now matured as a desktop system. The tests confounded the company's expectations. "We [at first] didn't see Linux on the desktop as a major market, but we were wrong," says a spokesman for the 35,000-employee organisation that serves more than 40 countries.

The bottom line
Is open source right for every organisation? In the end, argues Andy Mulholland, chief technology officer for Cap Gemini Ernst & Young, it's a question of attitude. "The arguments for and against open-source software often get very trivialised," he says. "It's not a technology issue; it's a business issue to do with externalisation."

Companies with an external focus, he says, which are used to working collaboratively with other organisations, and perhaps are already using collaborative technologies, stand to gain much more from open source than companies with an internal focus, which see the technology in terms of cost savings.

"The lesson of the Web is that standardisation is better than differentiation," Mulholland claims. "Is there a virtue in doing things differently? Is there a virtue in doing things the same way as everybody else?" As the past decade has shown, standardisation with a proprietary flavour -- think Microsoft -- has its drawbacks: bloatware, security loopholes, eye-popping licence fees and an unsettling reliance upon a single vendor.

In offices around the globe, an era of open-source standardisation, determined to condemn such drawbacks to history, may be dawning.

Source: http://www.techworld.com/opsys/features/index.cfm?featureid=1703
 

SSL versus IPSec VPN

Technology Updates

You're comfortable with the security of your network inside the office, but how do you feel about a salesman using his laptop to access your network from the local Starbucks?

It's easy to control security within the physical walls of your plant, but providing secure remote access to internal resources for externally connected users is more difficult. IPsec (IP security) and PPTP (Point-to-Point Tunneling Protocol) VPNs, and sometimes SSH tunneling, are enough, but these setups often have problems with NAT (Network Address Translation) traversal, firewalls and client management. An SSL (Secure Sockets Layer) VPN should solve those problems while still providing robust and secure remote access. However, an SSL setup comes with its own difficulties, such as problems with browser support, the increased privileges required on the client computer for anything other than pure HTTP applications, and the inherent security problem of data cached in the browser. For more information, see "ABCs of Remote Access".

Compare and Contrast

IPsec is a Layer 3 VPN: For both network-to-network and remote-access deployments, an encrypted Layer 3 tunnel is established between the peers. An SSL VPN, in contrast, is typically a remote-access technology that provides Layer 6 encryption services for Layer 7 applications and, through local redirection on the client, tunnels other TCP protocols. From a purely technical standpoint, you may be able to run both IPsec and SSL VPNs simultaneously, unless both the IPsec and SSL VPN products install client software on the user's computer; in that case, you may have stack conflicts.
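
To make the "local redirection" idea concrete, here is a minimal sketch of the kind of relay an SSL VPN client performs on the user's machine: a listener on the loopback interface accepts plain TCP from a local application and forwards the bytes over a TLS session to a gateway. The gateway name, port and the Python standard-library approach are illustrative assumptions, not any vendor's implementation, and the gateway side of the tunnel is out of scope.

# A minimal sketch of client-side "local redirection": plain TCP in on
# localhost, TLS out to a (hypothetical) gateway. Not a full SSL VPN client.
import socket
import ssl
import threading

LOCAL_PORT = 1080                    # where the local application connects (assumed)
GATEWAY = ("vpn.example.com", 443)   # hypothetical SSL VPN gateway

def pump(src, dst):
    # Copy bytes one way until either side closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client):
    # Wrap the outbound connection in TLS, then relay in both directions.
    context = ssl.create_default_context()
    raw = socket.create_connection(GATEWAY)
    tls = context.wrap_socket(raw, server_hostname=GATEWAY[0])
    threading.Thread(target=pump, args=(client, tls), daemon=True).start()
    threading.Thread(target=pump, args=(tls, client), daemon=True).start()

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", LOCAL_PORT))
    server.listen(5)
    while True:
        conn, _ = server.accept()
        handle(conn)

if __name__ == "__main__":
    main()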


SSL VPN vs. IPsec VPN

Organizations often base their VPN choice on cost, configuration and usability. If you're looking for a network-to-network VPN, the only real choice is IPsec. Check Point Software Technologies, Cisco Systems, Juniper Networks, Nortel Networks, Sonicwall and WatchGuard all offer IPsec VPNs with integrated firewalls. If you go this route, look at the vendor's customer-support track record, determine if security is built into its product and find out what features will be available down the line.

The Easier Path?

IPsec VPN solutions generally are a lot easier to manage. The client-to-gateway tunnel forms a network connection similar to that of dial-up networking. Ephemeral TCP/UDP ports are natively supported. If your traveling users are employing SIP (Session Initiation Protocol)- or H.323-based applications, IPsec has a clear advantage over SSL VPN because it's hands-free on the client side. Once the software is running, users interact with their software and remote services seamlessly.

The IPsec VPN is an open network from the desktop client to the destination network, but that doesn't mean the desktop is just an IP router. Because of the possible split tunneling problem--simultaneous access to a trusted and a nontrusted network--you can limit access through policies set on the IPsec gateway. However, as SQL Slammer demonstrated, a worm-infected host that connects to an internal network over IPsec can infect the internal network. Use the embedded IPsec gateway firewall or place a firewall between the gateway and the rest of the network for added protection.

The leading IPsec VPN gateways from Cisco and Nortel are easy to manage and offer hierarchical group management, tight integration with external authentication servers and extremely useful and detailed event logging on the gateway. The latter is critical when troubleshooting remote-user connection problems.

However, an IPsec VPN may cost you more in the long run. Let's consider license costs: An IPsec VPN typically costs between $10 and $25 per seat, while an SSL VPN ranges from $50 to $120 per seat, for a 500-user license. At first glance, the IPsec VPN seems appealing costwise. But once you factor in the costs of deploying and managing an IPsec client, the additional testing required before patching a client OS (remember that Windows XP Service Pack 2 broke many client applications, including IPsec clients) and the lost productivity of users who can't connect to the gateway over IPsec, it may not look like such a bargain. Additionally, many IT managers have found IPsec VPNs to be time-consuming for their staffs to maintain, because end users often need help downloading software or maintaining their connections.
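
As a rough sanity check on those figures, a back-of-the-envelope comparison for a 500-seat deployment might look like the sketch below. The per-seat licence prices are the midpoints of the ranges quoted above; the support-hour and hourly-cost figures are purely assumed for illustration.

# Back-of-the-envelope seat-cost comparison. Licence figures come from the
# quoted ranges above; all support figures are invented assumptions.
SEATS = 500
IPSEC_LICENCE = (10 + 25) / 2          # midpoint of $10-$25 per seat
SSL_LICENCE = (50 + 120) / 2           # midpoint of $50-$120 per seat

SUPPORT_HOURS_IPSEC = 2.0              # hypothetical client deploy/helpdesk time per seat
SUPPORT_HOURS_SSL = 0.25               # hypothetical (largely clientless)
HOURLY_COST = 50                       # hypothetical fully loaded support cost per hour

ipsec_total = SEATS * (IPSEC_LICENCE + SUPPORT_HOURS_IPSEC * HOURLY_COST)
ssl_total = SEATS * (SSL_LICENCE + SUPPORT_HOURS_SSL * HOURLY_COST)
print(f"IPsec total: ${ipsec_total:,.0f}   SSL total: ${ssl_total:,.0f}")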

Source: http://www.secureenterprisemag.com/showArticle.jhtml;?articleID=169400385
 

Fuzzers Used for Buffer Overflow Detection

Writing perfect secure code is hard. Daniel J. Bernstein has probably come the closest to it in practical, publicly released software. With his almost maniacal drive for security perfection, he has written multitudes of software that remain unbroken.

There was a reported bug in one of his mailing programs, but it was so obscure and unlikely to be used in real life that he refused to call it a security bug. You might be able to argue that point, but the fact is, that's only one obscure bug over many years of programming. Not many professional programmers can say that.

Then again, I haven’t seen him manage a large team of programmers writing millions of lines of code. I suspect that making a large team of programmers as passionate and careful about security programming as he is would prove more difficult than writing perfect code.

Many studies say that there are five to 10 bugs (albeit not all security holes) per 1,000 lines of code in the average program. No matter how hard you try to get rid of them, no amount of testing can beat every hacker in the world banging on your program. Just ask David LeBlanc, chief software architect for Webroot Software. He is the co-author of the best-selling book Writing Secure Code and was a leading security architect at Microsoft for six years.

David is a geek’s geek. When he starts talking about buffer overflows and how to prevent them, not many people argue -- he knows his stuff. During his tenure at Microsoft, he was instrumental in getting the company truly focused on more secure coding. The results of his efforts can be seen in Microsoft Office (been bothered by a macro virus lately?), Windows Server 2003, and IIS 6.0. Analysts have lauded all of them for their overall security and reliability, especially when compared to previous versions of the same.

David is passionate about secure coding. He taught Microsoft programmers how to write securely and gave them tools and methodologies to help. The process involved education, self review, peer review, team review, external review, and automated security tools. But try as he might, David couldn’t prevent all coding mistakes and buffer overflows.

This perplexed me a bit, because David’s as bright as they come. Working for Microsoft on a high-visibility product, he had senior management’s attention and support; and he had probably what comes as close to an unlimited budget as any of us will ever see in the private sector.

I asked David how those coding mistakes slipped by. Was it a lack of perfect tools, or was it human error?

As I should have guessed, he says both. Humans are ultimately to blame, but better tools would have helped when reviewing and approving the vast amount of code. There is just no way for a human being to catch every possible mistake that could go wrong with every line, especially when the coders are writing under a competitive deadline on complex software. That's where fuzzers can help.

A fuzzer is a software program or script designed to look for possible errors in a piece of programming code or script. The ultimate fuzzer would look for every input variable and try every possible allowable combination of input, hoping to find buffer overflows and unhandled coding errors. Fuzzers find most of the buffer overflows these days, and white- and black-hat hackers alike use them.
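
In its simplest form the idea can be sketched in a few lines of Python (one of the scripting languages mentioned below): mutate a known-good input at random, feed it to the target program and watch for abnormal exits. The target name and seed file are placeholders, and real fuzzers are far smarter about generating and triaging test cases.

# A minimal "dumb" mutation fuzzer sketch. SEED_FILE and TARGET are
# placeholders for a known-good input and a hypothetical parser under test.
import random
import subprocess

SEED_FILE = "sample.doc"
TARGET = ["./parser_under_test"]

def mutate(data, flips=16):
    # Flip a handful of random bytes in a copy of the seed input.
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def crashes(case_path):
    # Run the target on one malformed input and flag abnormal exits.
    try:
        result = subprocess.run(TARGET + [case_path], capture_output=True, timeout=10)
    except subprocess.TimeoutExpired:
        return False  # a hang is worth noting, but it is not a crash
    # On POSIX a negative return code means the process died on a signal
    # (e.g. SIGSEGV), which is the kind of crash that hints at a memory bug.
    return result.returncode < 0

def main(iterations=1000):
    with open(SEED_FILE, "rb") as f:
        seed = f.read()
    for i in range(iterations):
        case = f"case_{i}.bin"
        with open(case, "wb") as f:
            f.write(mutate(seed))
        if crashes(case):
            print(f"possible crash reproduced by {case}")

if __name__ == "__main__":
    main()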

Source: http://www.infoworld.com/article/05/09/09/37OPsecadvise_1.html

Professional bug hunters, such as eEye and Core Security Technologies (maker of penetration-testing tool Core Impact), find many of their bugs using fuzzers. A professional hacker friend of mine who works for the U.S. government (I'd tell you who he is, but then I'd have to kill you) agrees that fuzzing finds most of the bugs. He says fuzzers work so well that the hard part in writing a fuzzer is giving it intelligence, so it knows when it has found an error instead of relying on human intervention and observation.

There are many free fuzzers available on the Internet. For example, iDefense's Filefuzz program lets you malform many different Windows file formats. SPIKEfile does the same thing for Linux files. The HTML Manglizer fuzzes HTML parsers. It was responsible for finding the Download.ject exploit (thanks to Karl Levinson for this one). Many fuzzers, such as Smudge, are written in scripting languages like Python.

With that said, even fuzzer-reviewed code will still contain mistakes, because fuzzers are written by humans and can implement only the mistakes that a human could possibly think of.

If you are in charge of coding anything, you need to program securely. Get educated in secure coding, follow a secure coding methodology, and consider using an automated program to help find the bugs. A good fuzzer might help. If you don’t use one, there's a good chance a hacker will.

Friday, September 09, 2005

 

Cached Clipboard Information May Leak Your Credit Card

We all copy data with Ctrl+C to paste it elsewhere. The copied data is stored in the clipboard and can be accessed from the web by a combination of JavaScript and ASP.

Just try this:
1) Copy any text with Ctrl+C.
2) Click the link: http://www.friendlycanadian.com/applications/clipboard.htm
3) You will see the text you copied displayed on the screen, accessed by that web page.

Do not keep sensitive data (passwords, credit card numbers, PINs, etc.) in the clipboard while surfing the web. It is extremely easy to extract the text stored in the clipboard and steal your sensitive information.


Wednesday, September 07, 2005

 

Download Visual Web Developer 2005 Express Edition Beta 2 Today


What do you get when you combine the "drag and drop" ease of Visual Basic development with the blinding speed of ASP.NET 2.0?
Microsoft® Visual Web Developer™ 2005 Express Edition.
Download Visual Web Developer 2005 Express Beta 2 and you will be well on your way to becoming a Web development superhero.

Visual Web Developer 2005 Express gives you everything you need to easily design, build and deploy powerful, dynamic Web applications faster than ever before.



http://msdn.microsoft.com/asp.net/getvwd/

Monday, September 05, 2005

 

Minimizing Downtime with Disk Image Restores

Increasing productivity and reducing costs have become the mantra of I.T. and network managers since the Internet bubble burst. No longer is money flowing to new technologies just because they're there; today's I.T. managers are more pragmatic and wary of investments that won't deliver a fast, quantifiable return.
All too often, though, I.T. managers fail to consider the unintended expenditures that result from sticking with traditional technologies that work "well enough" that they haven't been replaced.

Many of these legacy technologies fall under the general categories of disaster recovery and business continuity. Preparing in advance for interruptions in your business operations is critical to surviving them, but some companies today still consider legacy technologies to be "sufficient."

Today's I.T. managers are bombarded by a complex array of technologies that promise to provide various levels of backup and data security. The problem is, in some cases the resulting backup is crippled because it lacks all of the information the I.T. manager requires for a bare-metal restore.

The less complete the backup of a server's disks, the longer it will take to restore the system should disaster befall you. As a result, an I.T. department ultimately could spend a considerable amount of extra time, and thus money, duplicating work it has already done, simply because it lacked the necessary disk imaging software.



Traditionally, enterprise-class backup has been file-based backup to tape. Tape is relatively reliable and inexpensive, but it's slow and serial, takes up quite a bit of space and requires far more maintenance than disk storage. In addition, while tape can be used for disk imaging, its speed and capacity make it better suited for off-line archiving of file-based backups than for online, image-based backups.

Today's disk-to-disk back-up strategies provide far superior performance, but at a cost. If the server is using SCSI, iSCSI or fibre channel storage devices, the hardware infrastructure cost can be significant. An IDE-based array is far less expensive, but it has limited usefulness in a large enterprise, where IDE is generally relegated to desktop systems.

However, a new generation of IDE drives -- those that spin at 15,000 RPM -- could change that. A less-expensive network-attached storage server could significantly improve the return on investment for disk imaging-based storage subsystems.

Bad Things Happen to Nice Computers

Backups are the lifeblood of any enterprise. They need to be portable and to be part of an overall disaster recovery/business continuity process. Understanding how to make backups portable, so that they can be stored offsite, in a vault, or simply physically away from the server, is a basic task of any I.T. manager.



Online or near-line storage of recently-archived data remains quite common, particularly in a hierarchical storage management (HSM) environment. Using some of that online or near-line storage for housing disk images of live server disks can significantly enhance your ability to recover from a disaster by significantly reducing the time it takes to access and restore necessary applications, the OS, patches, updates and, of course, data.

By far the most common types of disasters hitting enterprises today are various types of malware -- viruses, Trojans and other malicious code deliberately or accidentally introduced via e-mail, downloads or user software that has not been vetted by the I.T. department before being loaded onto workstations.

If your antivirus software doesn't catch the malware (and all too often new strains of old viruses seep through even regularly updated antivirus programs), you're in for potentially serious system problems. In a best-case scenario, only one system is affected, and not seriously.

However, that best-case can easily turn into a worst-case if that one system turns out to be your mail server, web server, SQL server or other mission-critical system.

Potential lost revenue is not calculated only from lost sales or other direct transaction-based operations; it also includes the lost productivity of all affected employees, the lost goodwill of customers and potential customers who view a downed server as a sign of inadequate I.T. oversight, and other factors.



All I.T. managers should have a checklist of items they use to ensure that if a problem occurs, they will be able to continue business operations with the least amount of downtime. High on that list should be a plan for a bare-metal restore of affected servers.

An exact image of your server disks stored on a remote network or a removable drive will give you the fastest bare-metal restore possible. Remember that you're not just recovering the user data files and a clean install of the operating system, but also all of the OS security patches and updates, the applications, the applications' security patches and updates, as well as numerous configuration files and other custom programming.

On top of that, remember to factor in the time it takes to collect all of the server applications, serial numbers, updates, patches and such. Depending on the organization of the I.T. department, this task conceivably could take more time than the software installation and configuration itself.

File vs. Folder vs. Partition

Of course, if the loss is localized or limited, you might only need to restore a single file or folder. Here again, time can be of the essence, depending on the severity of the corruption and the file or files affected. If the damaged files are operating system files, that could significantly impair the I.T. manager's ability to get the system back up and running quickly.

In such a case, it is useful to be able to boot the server independently of the installed OS. By bypassing the system OS, you can restore the damaged file without resorting to a major reinstallation of the full OS.

Acronis True Image Enterprise Server, for example, uses a Linux-based emergency rescue disk. Should a Windows 2003 Server fail due to a corrupted file, the I.T. manager can boot the individual server, restore the specific files that have been damaged, then reboot the server as if nothing has happened at all.

In fact, if the I.T. manager doesn't know which specific files were damaged, an entire directory can be restored just as easily as restoring a single file.

The process is incredibly simple. After booting from the emergency rescue disk, the I.T. manager can mount the image of the affected system as a virtual drive. The interface is a standard Windows XP Explorer-like graphical user interface. The requisite folder is identified and using a simple drag-and-drop, the image is copied back onto the damaged drive. The virtual image is then unmounted -- a one-click function -- and the system is rebooted back to the original OS.

The time it takes to restore the damaged files is literally minutes, not hours or days. In fact, the time it takes to restore an entire disk drive from an image can be measured in minutes.

When talking about return on investment, it's useful to have some sort of measure on which to base the number. Extensive analyses have been performed to calculate downtime costs to organizations when their servers fail. However, there is another important calculation of downtime that often gets overlooked.

There is a significant productivity difference between disk imaging software that images live servers and programs that require the I.T. manager to boot the server to DOS first. This becomes very acute at the workstation level. Here's why: Let's assume that a company has 2,080 employees, each of whom images their workstation once per week, and that it takes one hour to create the image. Let's also assume that the server is imaged once per week for a full backup, with incremental images made nightly.

If the workstations have to be booted to DOS in order to be backed up, that means that every week the company will have 2,080 instances of nonproductive employee time. That's the equivalent of one employee's work year.

Over the course of one calendar year, the company will end up paying the equivalent of 52 employee years of work that wasn't done. That's roughly the same as adding 52 additional employees to the payroll (minus payroll taxes and other load) -- or 2.5 percent of the company payroll expense.
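
For readers who want to check that arithmetic, the numbers work out as follows, using the article's own assumptions plus the conventional 2,080-hour (40 hours x 52 weeks) work year:

# Quick check of the downtime arithmetic above.
EMPLOYEES = 2080
HOURS_LOST_PER_WEEK = EMPLOYEES * 1      # one hour of imaging per workstation per week
HOURS_PER_WORK_YEAR = 40 * 52            # 2,080 hours

weekly_employee_years = HOURS_LOST_PER_WEEK / HOURS_PER_WORK_YEAR   # 1.0 per week
annual_employee_years = weekly_employee_years * 52                  # 52 per year
share_of_payroll = annual_employee_years / EMPLOYEES                # 0.025, i.e. 2.5%

print(weekly_employee_years, annual_employee_years, f"{share_of_payroll:.1%}")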

And that calculation only addresses workstations that have to boot to DOS in order to be imaged. If the server has to be booted to DOS as well, that complicates the equation even more.

Incremental Backups

Ensuring that you always have a current version of your server disk is critical to any disaster recovery plan. However, imaging a server disk daily can be time-consuming. As an alternative, you might consider creating a master image weekly and incremental images on a daily basis.

Incremental images only image those sectors of a disk that change. In the vast majority of enterprises, the operating system and applications are kept on separate partitions from the user data. By scheduling incremental images nightly on each partition, you can keep an exact copy of your server disks current while minimizing the time it takes to image the system.
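
Conceptually, a block-level incremental image can be sketched in a few lines of Python: hash fixed-size blocks of the source and write out only the blocks whose hashes have changed since the previous run. The block size, paths and on-disk layout below are illustrative assumptions, not how Acronis or any other commercial product is actually implemented.

# Minimal sketch of block-level incremental imaging. SOURCE, BASE_DIR and
# BLOCK_SIZE are illustrative placeholders.
import hashlib
import json
import os

BLOCK_SIZE = 1 << 20              # 1 MiB blocks (assumed)
SOURCE = "/dev/sda1"              # partition to image (placeholder, needs root)
BASE_DIR = "backup"               # where changed blocks and the hash index live

def read_blocks(path, block_size=BLOCK_SIZE):
    # Yield (index, block) pairs for fixed-size blocks of the source.
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(block_size)
            if not block:
                break
            yield index, block
            index += 1

def incremental_image(source=SOURCE, base_dir=BASE_DIR):
    os.makedirs(base_dir, exist_ok=True)
    index_path = os.path.join(base_dir, "index.json")
    previous = {}
    if os.path.exists(index_path):
        with open(index_path) as f:
            previous = json.load(f)

    current = {}
    for i, block in read_blocks(source):
        digest = hashlib.sha256(block).hexdigest()
        current[str(i)] = digest
        if previous.get(str(i)) != digest:
            # Only blocks that changed since the last run are written out.
            with open(os.path.join(base_dir, f"block_{i}_{digest[:8]}"), "wb") as out:
                out.write(block)

    with open(index_path, "w") as f:
        json.dump(current, f)

if __name__ == "__main__":
    incremental_image()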

Normally, the server image will be stored on a networked drive. Point the incremental image at the same directory as the primary image. If no full image is found in the target directory, a quality imaging package such as Acronis True Image Enterprise Server will create a full image, regardless of the instructions programmed into the setup.

This is because an incremental image uses the last full image and any interceding incremental images, as a basis for the latest incremental image. If none is found, the software must assume that there is no base image. If your imaging software does not make this assumption, you could end up with a partial and completely useless incremental image.

Change Management

Anyone who has ever tried to upgrade an operating system, patch an application or install "software upgrades" understands the need to have a complete image of a working hard disk. This fact was driven home to many users recently when they tried to upgrade to Windows XP Service Pack 2.

Sometimes an upgrade will crash an application, damage a network connection or cause other unanticipated problems. By having an image of the hard disk in a known, working state, no upgrade, virus or other software change will completely trash a system. Restoring a known, good image will dramatically decrease potential downtime due to problem software installations and upgrades that simply don't work.

Disk Cloning

Efficient disk imaging software can provide another important function in the I.T. department: disk cloning. In many situations, such as providing a standard notebook environment to a sales force or deploying multiple servers, the I.T. manager wants to keep the operating environments identical. A base image can provide that.

In the case of a mobile workforce, the base image might include the company product database, a contact manager, a standard office suite and a preconfigured network setup. By using a standard setup, the I.T. manager can save considerable time when deploying systems to new sales staff. In the case of a new machine, an image with all of the necessary information can be laid down and the machine can be sent to the new employee almost immediately.

If the laptop in question had been used by a previous salesperson, that person's database can be uploaded to the corporate database. Laying a new image over the existing drive not only provides the new salesperson with a fresh install, but also eliminates the possibility of a former, disgruntled employee setting off a hidden virus.

It removes any software changes a prior employee might have made and overwrites any private data that might not be appropriate for the new employee.

Disk cloning also works with networked workstations. An image stored on a server can be deployed to multiple desktops. This eliminates the need for I.T. personnel to physically touch every new system being deployed. A multicast image can configure multiple systems simultaneously; conversely, you also can image multiple servers or workstations.

Where server deployment is required, laying down a fresh install of the operating system, all necessary patches and upgrades, all configuration files and the like could save an I.T. engineer hours of work. A clean, tested disk image means that a standardized server with a known, good configuration can be ready to deploy in a single day.

There is no need to start testing all network configuration information from scratch -- a disk image that includes a preconfigured network configuration (sans the IP address, of course), can eliminate a lot of redundant work.

Conclusion

Disk imaging plays an important part in not only disaster recovery and bare-metal restores, but also in disk deployment and change management. Being prepared for a disaster before it happens will go a long way in saving considerable amounts of time and money.

It's not enough today to just have data backups; a full image of the server disks can save literally days of installation, patching and configuration time. And when you're not recovering from a disaster, you can be sure you'll be deploying new software, managing software upgrades and changes and spending a lot of time managing your Windows-based desktops.

As any I.T. manager worth their salt will tell you, managing server and desktop software is not unlike juggling: you try to keep a dozen balls in the air at all times and hope they don't fall on the floor. When they do, disaster strikes, and you've got to be ready.

Source : http://www.cio-today.com/news/Minimizing-Downtime-with-Disk-Images/story.xhtml?story_id=0010002CKD92
 

Windows Firewall flaw may hide open ports

A flaw in Windows Firewall may prevent users from seeing all the open network ports on a Windows XP or Windows Server 2003 computer.

The flaw manifests itself in the way the security application handles some entries in the Windows Registry, Microsoft said in a security advisory published Wednesday. The Windows Registry stores PC settings and is a core part of the operating system.

The bug could allow a firewall port to be open without the user being informed through the standard Windows Firewall user interface, according to the Microsoft advisory. The company has released a fix that can be downloaded from Microsoft's Web site and will be part of a future Windows service pack, the company said.

Microsoft said the firewall issue is not a security vulnerability but said the flaw could be used by an attacker who already compromised a system in an attempt to hide exceptions in the firewall.

For example, miscreants who have penetrated a computer could create and hide a firewall exception by inserting a malformed Windows Firewall exception entry in the Windows Registry. "An attacker who already compromised the system would create such malformed registry entries with the intent to confuse a user," Microsoft said.

Like other firewall software, Windows Firewall is meant to block incoming traffic to a computer. Users can allow incoming connections by creating exceptions. Windows Firewall displays these exceptions in the firewall UI, which can be reached by going to the Windows Control Panel and selecting Windows Firewall.

PC users can view all firewall exceptions--including those the unpatched Windows Firewall doesn't see--through other tools, Microsoft notes. Typing "netsh firewall show state verbose = ENABLE" at a command prompt will display all active exceptions, the company said in its advisory.
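
For administrators who want to go one step further and inspect the raw registry entries the advisory describes, a sketch along the following lines can enumerate the per-port exceptions directly. The registry path is the one commonly documented for the XP SP2 firewall's standard profile; treat it as an assumption and verify it on your own systems.

# Hedged sketch: list Windows Firewall per-port exceptions straight from the
# registry instead of trusting the firewall UI alone. Run on the XP/2003
# machine being checked; the key path below is assumed, not authoritative.
import winreg

KEY_PATH = (r"SYSTEM\CurrentControlSet\Services\SharedAccess"
            r"\Parameters\FirewallPolicy\StandardProfile\GloballyOpenPorts\List")

def list_port_exceptions():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value_count = winreg.QueryInfoKey(key)[1]   # number of values under the key
        for i in range(value_count):
            name, value, _type = winreg.EnumValue(key, i)
            # Each value normally encodes "port:protocol:scope:name"; entries
            # that do not parse cleanly deserve a closer look.
            print(name, "->", value)

if __name__ == "__main__":
    list_port_exceptions()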

Source: http://news.com.com/Windows+Firewall+flaw+may+hide+open+ports/2100-7355_3-5845850.html?tag=nefd.top
