Monday, April 25, 2011

Relationship between security and usability – authentication case study

Abstract—This paper discusses the relationship between two seemingly independent aspects of software quality – security and usability. The relationship is demonstrated in a case study of password authentication. For this purpose, a method of evaluating password security is suggested and described in this paper. The method is based on a mathematical model of dictionary and brute force attacks. The model is used to break passwords gained from two studies in which different groups of end users were instructed to select a password in different ways. Afterwards, the security of the selected passwords was evaluated and compared with their usability, and the relationship between the two was examined.


I. INTRODUCTION

Data security is a topical issue that is widely discussed, especially in the public administration domain and in solving spatially oriented problems, because of the value of the information that data contain. One of the requirements on secure information systems is secure authentication of the persons working with these systems.
Although many mature authentication mechanisms exist (for example smart cards and biometrics), passwords are still commonly used for this purpose. The reasons are low cost and ease of implementation.
Although this way of authentication is generally accepted by end users, passwords have many deficiencies arising from the limitations of human memory. It is difficult for end users to remember long strings of randomly generated characters. That is why end users select commonly used words as their passwords – names of football clubs, names of pets, and so on. Such weak passwords are, of course, not resistant to dictionary and brute force attacks.
The recent literature provides evidence of the weakness of real-world passwords against these types of attack.
When users are forced to create strong passwords (that is, passwords that are long enough, randomly generated, and used for only one system), they write them down or forget them. This behavior can make social engineering attacks easier.
That is why password authentication appears to involve a tradeoff: it seems that a more secure password means a less usable password.
Generally, usability of a user interface is the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. Usability is one of the quality aspects of software and consists of the following criteria: learnability, efficiency, memorability, errors and satisfaction. It can be examined in different types of user interface, from commercial web pages to e-learning systems.




II. PROBLEM FORMULATION

As mentioned above, password authentication appears to involve a tradeoff between security and usability. Many authors discuss the factors that influence password security, for example length, randomness, and the period for which the password is used. Some authors try to draw a distinction between a "weak" and a "strong" password, commonly by relying on expert opinion. Other authors try to break passwords and present the results of their experiments as proof of password weakness.
The authors of this paper are convinced of the need to investigate the mutual influence of security and usability. As a case study, the authors decided to investigate password authentication. In addition, the authors feel the need for a more exact evaluation of password security.
For these reasons, the authors suggest an exact measure of the security of a given password and conduct surveys and experiments with the goal of comparing passwords of different security levels with respect to their usability.




III. SECURITY OF GIVEN PASSWORD
A. General Principle

There are various factors that influence the security of password authentication. As depicted in Fig. 1, which is adapted from [16], it is possible to divide these factors into two basic groups: the first group is formed by human factors and the second by technological factors.

Human factors can be divided into two categories:
  • Type of password (length, randomness, characters used, etc.)
  • The way the user guards the password (how often the user changes it, whether the user writes it down, and so on)
Since users are thought to be the weakest link of every security solution, it is necessary to study their behavior. We are convinced of the need to study how users choose their passwords, because this evidently influences the security of this kind of authentication.
Because we are interested in password type rather than in technological factors, we suggest as a measure of the security of a given password the expected number of attempts an attacker has to carry out to break it. The advantage of this criterion is its independence from technological factors. Time and cost criteria can be derived from this basic criterion if needed; for example, it is not difficult to determine how many attempts per hour one must make in order to successfully crack a password at the network level. The evaluation of passwords from a security point of view is composed of two phases:
  1. Attack simulation model
  2. Password security evaluation, on the basis of the attack simulation model
B. Attack Simulation Model

When constructing a model of a dictionary attack and a brute force attack, we formulate two assumptions:
  1. Attackers choose the most effective way of attack.
  2. Attackers know what types of passwords users select.
For simplicity, but without loss of accuracy, we can treat a brute force attack as a special kind of dictionary attack. The size of this virtual dictionary can be calculated by Eq. (1).

Now we can consider a dictionary attack and a brute force attack to be a well-considered sequence of tests that check whether a password is a word from a given dictionary. The question is which dictionary an attacker uses on the first attempt, which on the second, and so on.
Based on the assumptions discussed above, the attacker prefers dictionaries that maximize the probability of success and minimize the number of attempts needed to break the password. This criterion can be expressed by Eq. (2).
Because we expect that the attacker will not test words he has already tested, when sorting dictionaries we recursively remove the used words and re-evaluate the remaining dictionaries.
The overall process is described by the following algorithm.
Step 1: Gather passwords that were used in a given environment by a given kind of users.
Step 2: Gather all possible dictionaries that can contain passwords gathered in step 1. These dictionaries will be used for dictionary attack simulations.
Step 3: Create virtual dictionaries that consist of all one-character strings, all two-character strings, and so on, that could contain the passwords gathered in Step 1. The sizes of these dictionaries, NVD, can be calculated by Eq. (1). These dictionaries will be used for brute force attack simulations.
Step 4: Calculate the success rate of the dictionary attack, SDA(d), for every dictionary using Eq. (2).
Step 5: If the success rate SDA(d) is zero for every dictionary, stop the algorithm; otherwise continue.
Step 6: Select the dictionary with the maximum attack success rate. This dictionary is used in the attack simulation model in the order in which it was selected.
Step 7: Delete all the words contained in the selected dictionary from the remaining dictionaries. A new set is created from the remaining, reduced dictionaries.

Step 8: Repeat step 4 for the set of remaining reduced dictionaries.
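
A minimal sketch of this greedy ordering in C# follows. It assumes (since the paper's formulas are not reproduced here) that Eq. (1) gives the size of a virtual dictionary as N^k for an alphabet of N symbols and word length k, and that the success-rate criterion of Eq. (2) is the share of the collected passwords contained in a dictionary divided by the dictionary's size; the helper names are illustrative only, and the virtual brute force dictionaries are handled analytically rather than enumerated.

using System;
using System.Collections.Generic;
using System.Linq;

class DictionaryOrdering
{
    // Assumed form of Eq. (1): a virtual dictionary of all k-character strings
    // over an alphabet of N symbols has N^k entries (36^k matches the sizes
    // listed later for the brute force dictionaries).
    static double VirtualDictionarySize(int alphabetSize, int length)
    {
        return Math.Pow(alphabetSize, length);
    }

    // Assumed form of Eq. (2): success rate of a dictionary attack with
    // dictionary d = (number of collected passwords contained in d) / |d|.
    static double SuccessRate(HashSet<string> dict, List<string> passwords)
    {
        return dict.Count == 0 ? 0.0 : passwords.Count(dict.Contains) / (double)dict.Count;
    }

    // Steps 4-8: repeatedly select the dictionary with the highest success
    // rate, then remove its words from the remaining dictionaries.
    static List<HashSet<string>> OrderDictionaries(List<HashSet<string>> dictionaries,
                                                   List<string> passwords)
    {
        var ordered = new List<HashSet<string>>();
        var remaining = dictionaries.Select(d => new HashSet<string>(d)).ToList();

        while (remaining.Count > 0)
        {
            var best = remaining.OrderByDescending(d => SuccessRate(d, passwords)).First();
            if (SuccessRate(best, passwords) <= 0.0)
                break;                        // Step 5: no dictionary can break anything

            ordered.Add(best);                // Step 6: fix its position in the order
            remaining.Remove(best);
            foreach (var d in remaining)      // Step 7: reduce the remaining dictionaries
                d.ExceptWith(best);
        }
        return ordered;
    }
}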

C. Password Security Evaluation
The result of the previous algorithm is an ordered set of reduced dictionaries that an attacker can use if he wants to break a password in the most effective way. Now it is easy to calculate the security of a password, defined as the expected number of attempts the impostor has to carry out to break the password, with the help of Eq. (3).
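
A sketch of this evaluation under one additional assumption – that the order of attempts inside each reduced dictionary is unknown, so a password found in the j-th dictionary costs on average the total size of the preceding dictionaries plus (|Dj| + 1) / 2 attempts – might look as follows; a password found in no dictionary gets the total size of all dictionaries as a lower bound.

// Assumed reading of Eq. (3): expected number of attempts needed to hit the
// given password when the ordered reduced dictionaries are tried in turn.
static double PasswordSecurity(string password, List<HashSet<string>> orderedDicts)
{
    double attemptsBefore = 0.0;
    foreach (var dict in orderedDicts)
    {
        if (dict.Contains(password))
            // order inside the dictionary is unknown, so use the average position
            return attemptsBefore + (dict.Count + 1) / 2.0;
        attemptsBefore += dict.Count;
    }
    return attemptsBefore;   // not broken by any dictionary: lower bound only
}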


D. Ordered list of reduced dictionaries
In 2008 we collected 1,895 passwords that were actually used on web pages. All users who selected the passwords were Czech speakers. A password had to contain at least one character; the maximum length was not restricted. Users had no time limit when selecting a password, and passwords could contain arbitrary characters typed on a keyboard.
First, Exploratory Data Analysis (EDA) was applied to this password collection. The goal of the analysis was to form basic assumptions about users' behavior and to select pertinent dictionaries. Diacritic characters were rarely used, appearing in only 1.8% of passwords. Further, only 10.6% of passwords contained an uppercase character and 23.2% of passwords contained at least one numeral.
Users did not use long passwords; the typical password length was about 6 characters (see Fig. 2).
After dividing the acquired passwords into four groups according to the "randomness" of the password, it is apparent that users prefer common words as their passwords (see Fig. 3).


This assumption is supported by a hypothesis test on the correlation between the frequencies of characters in passwords and the frequencies of characters in Czech words (the Kendall rank correlation coefficient equals 0.78) – see Table I and Table II.




TABLE I
FREQUENCY OF CHARACTERS
Character   Frequency in Czech   Frequency in passwords
A           0.086                0.158
B           0.017                0.024
C           0.033                0.027
D           0.036                0.041
E           0.105                0.082
F           0.002                0.009
G           0.002                0.011
H           0.022                0.020
I           0.075                0.065
J           0.022                0.022
K           0.036                0.064
L           0.042                0.051
M           0.035                0.039
N           0.068                0.062
O           0.080                0.070
P           0.032                0.026
Q           0.000                0.001
R           0.049                0.065
S           0.063                0.044
T           0.051                0.047
U           0.040                0.028
V           0.043                0.020
W           0.000                0.007
X           0.001                0.005
Y           0.028                0.006
Z           0.032                0.008






TABLE II
CORRELATION OF CHARACTERS
                     Kendall Tau   p-value
Password & Czech     0.78          0.000000
Password & English   0.62          0.000008
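
As a rough cross-check of Table II, a minimal C# computation of the Kendall coefficient over the Table I frequencies might look like the sketch below. It implements the simple tau-a variant, which ignores tied pairs, so its result may differ slightly from the reported 0.78 if a tie-corrected variant (tau-b) was used in the paper.

using System;

class KendallExample
{
    // Kendall tau-a between two equally long sequences; tied pairs count as
    // neither concordant nor discordant.
    static double KendallTauA(double[] x, double[] y)
    {
        int n = x.Length, concordant = 0, discordant = 0;
        for (int i = 0; i < n - 1; i++)
            for (int j = i + 1; j < n; j++)
            {
                double s = (x[i] - x[j]) * (y[i] - y[j]);
                if (s > 0) concordant++;
                else if (s < 0) discordant++;
            }
        return (concordant - discordant) / (0.5 * n * (n - 1));
    }

    static void Main()
    {
        // Character frequencies A..Z taken from Table I.
        double[] czech     = { 0.086, 0.017, 0.033, 0.036, 0.105, 0.002, 0.002, 0.022, 0.075,
                               0.022, 0.036, 0.042, 0.035, 0.068, 0.080, 0.032, 0.000, 0.049,
                               0.063, 0.051, 0.040, 0.043, 0.000, 0.001, 0.028, 0.032 };
        double[] passwords = { 0.158, 0.024, 0.027, 0.041, 0.082, 0.009, 0.011, 0.020, 0.065,
                               0.022, 0.064, 0.051, 0.039, 0.062, 0.070, 0.026, 0.001, 0.065,
                               0.044, 0.047, 0.028, 0.020, 0.007, 0.005, 0.006, 0.008 };

        Console.WriteLine(KendallTauA(czech, passwords));
    }
}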

After the exploratory data analysis we gathered 35 potential dictionaries that could contain the passwords collected in this research study. We applied the algorithm discussed above and created the ordered list of reduced dictionaries. The final order of these reduced dictionaries is as follows:
1) Czech First Names (490 words)
2) Common Czech Words (382 words)
3) Common Passwords (239 words)
4) Czech First Names (the first character uppercase) (490 words)
5) Years 1900 – 2029 (114 words)
6) Common Logins (2,131 words)
7) The Most Commonly Used English Words (391 words)
8) Czech and American Word Combinations (496 words)
9) Word Personages (437 words)
10) American Women Names (4,414 words)
11) American Men Names (3,020 words)
12) Slovak Dictionary (17,952 words)
13) Common Word Connection (796 words)
14) Electronic Firms (41,053 words)
15) Foreign First Names (8,801 words)
16) Czech Dictionary (157,228 words)
17) Bible Characters (10,654 words)
18) Unusual First Names (4,612 words)
19) English Dictionary (317,410 words)
20) States and Towns (68,729 words)
21) Big English Dictionary (581,000 words)

The next 15 dictionaries were virtual dictionaries that simulated a brute force attack following the simulated dictionary attack. The list of these virtual dictionaries is:
22) 1-character words dictionary (36 words)
23) 2-character words dictionary (1,296 words)
24) 3-character words dictionary (46,656 words)
25) 4-character words dictionary (1,679,616 words)
26) 5-character words dictionary (60,466,176 words)
27) 6-character words dictionary (2,176,782,336 words)
28) 7-character words dictionary (78,364,164,096 words)
29) 8-character words dictionary (2.82111E+12 words)
30) 9-character words dictionary (1.0156E+14 words)
31) 10-character words dictionary (3.65616E+15 words)
32) 11-character words dictionary (1.31622E+17 words)
33) 12-character words dictionary (4.73838E+18 words)
34) 13-character words dictionary (1.70582E+20 words)
35) 14-character words dictionary (6.14094E+21 words)
36) 15-character words dictionary (2.21074E+23 words)

The security of passwords drawn from these 36 reduced dictionaries can be seen in Table III.




IV. EXPERIMENTAL STUDY I
 

In 2009 we conducted an experiment inspired by [18] in which we asked 64 students to choose passwords and write them in questionnaires. The questionnaires also assigned a random password of 6 to 7 characters to each student.
Next, the students were trained in how to create a passphrase – a password based on a mnemonic phrase. After this training the students were asked to choose a passphrase and write it down in the questionnaire.
In this way, three passwords were assigned to every student – a common self-selected password, a randomly generated password of 6-7 characters, and a passphrase. The students were asked to remember all three passwords and not to write them down. Two months later the participants were asked to recall the three passwords and write them down on prepared forms. We found the following results (see Table IV):
However, the participants did not actually use the passwords during the intervening two months. Even so, the results of this experiment provide a quantitative point of reference for the difficulty of random passwords. From this table it is possible to see that self-selected passwords and passphrases give similar results: passphrases are as easy to remember as self-selected passwords. In the next phase of the experiment we subjected the acquired passwords to the simulated dictionary attack and brute force attack and evaluated them from the security point of view.
The goal was to compare the security of passwords created by the different methods. The results of these simulated attacks are shown in Table V.

From the results of the simulated dictionary attack and brute force attack we can claim that no random password and no passphrase could be broken by the dictionary attack, and that these types of passwords have a password security of more than 1,245,495. By contrast, self-selected passwords are vulnerable to a dictionary attack. For example, after 930,335 attempts to break a self-selected password, the probability that the password is broken is about 0.5 (see Fig. 4).




V. EXPERIMENTAL STUDY II

This experimental study, also inspired by [18], was conducted in 2010. Its goal was to investigate the tradeoff between security and memorability in a real-world context. In this experiment, 56 second-year students at the University of Pardubice were divided into three experimental groups. Each student was then given a sheet of advice on how to create a password, depending on the group to which he or she had been randomly assigned. The three different types of advice were:
  • Control group. The participants in this group were given the same advice as in previous years, which was simply that "Your password should contain both alphabetical and numerical characters and should be long".
  • Random password group. The participants in this group were given a printed sheet with the letters A-Z and the numbers 1-9 printed repeatedly on it. They were asked to choose a random password by closing their eyes and picking at least seven characters. The participants were told to write the chosen password down and destroy the note once the password was memorized.
  • Passphrase group. The participants in this group were asked to choose a password based on a mnemonic phrase.
The numbers of participants in the three groups were as follows (see Table VI):

The participants used their passwords at least once a week. We ran this experiment for one month. During this period we counted the password reset requests made when a student forgot his or her password. The exact numbers of these requests can be seen in Table VII.
As expected, most password reset requests came from the random password group. The reason is that a randomly generated password is difficult to remember.

One month after the tutorial session we asked the students to fill in questionnaires asking whether they had had difficulty remembering their password. The survey asked the following questions:
  • How hard was it to memorize your password (scale from 1 – trivial to 5 – impossible)?
  • How many weeks did you need to remember your password?
The results of this survey are summarized in Table VIII. From this table it is possible to see that it is difficult to remember a randomly generated password.
Finally, we subjected the collected passwords to the model of dictionary attack and brute force attack. As expected, the results of the control group were worse than the results of both the random password group and the passphrase group. While it was possible to break 10 passwords from the control group by dictionary attack, no password from the random password or passphrase groups could be broken by this type of attack. The results of these simulated attacks can be seen in Table IX.


VI. CONCLUSION

Although security and usability are separate aspects of software quality, there is a dependence between these two aspects. This dependence was demonstrated for authentication by passwords: when end users are forced to use more secure passwords, these passwords are less learnable and memorable.
It was confirmed that users have difficulty remembering random passwords: only 12 percent of users were able to recall these passwords after two months. Passwords based on mnemonic phrases, however, are more memorable than random passwords and have a similar security level.
By educating users to use mnemonic passwords we can gain a significant improvement in security.
However, we assume that there can be a different type of dependency between usability and security. In some cases higher usability can result in higher security, when end users do not make mistakes that can lead to security faults. An example is a password written down in a calendar because it is too difficult to remember.

ACKNOWLEDGMENT
This paper was created with the support of the Grant Agency of the Czech Republic, grant No. 402/08/P202, Usability Testing and Evaluation of Public Administration Information Systems, and grant No. 402/09/0219, Usability of software tools for support of decision-making during solving spatially oriented problems.


REFERENCES

[1] P. Sedlák, J. Komárková, A. Piverková. Spatial analyses help to find movement barriers for physically impaired people in the city environment – Case study of Pardubice, Czech Republic. WSEAS Transactions on Information Science & Applications, WSEAS Press, 2010, Volume 7, Issue 1, pp. 122-131. ISSN 1790-0832.
[2] P. Sedlák, J. Komárková, A. Piverková. Geoinformation Technologies Help to Identify Movement Barriers for Physically Impaired People. In Scientific Papers of the University of Pardubice : Series D. Special Edition. Pardubice: Univerzita Pardubice, 2009. p. 125-133. ISSN 1211-555X. ISBN 978-80-7395-209-9.
[3] P. Sedlák, J. Komárková, M. Jedlička, R. Hlásný, I. Černovská. The use of modelling tools for modelling of spatial analysis to identify high-risk places in barrier-free environment. INTERNATIONAL JOURNAL OF SYSTEMS APPLICATIONS, ENGINEERING & DEVELOPMENT, Issue
1, Volume 5, 2011. ISSN 2074-1308.
[4] R. Myšková. Economic dimension of the value of information (originally in Czech). Scientific Papers of the University of Pardubice, Series D, 2006, No. 10, pp. 228-232.
[5] J. Valášek. Zranitelnost prvků kritické infrastruktury (Vulnerability of critical infrastructure elements). Informační zpravodaj, Vol. 17, No. 1, 2006. MV-GŘ HZS ČR, Institut ochrany obyvatelstva, Lázně Bohdaneč. ISBN 80-86640-60-4.
[6] A. AlAzzazi, A. E. Sheikh, Security Software Engineering: Do it the right way, Proceedings of the 6th WSEAS Int. Conf. on Software Engineering, Parallel and Distributed Systems, Corfu Island, Greece,
pp. 19-23, 2007.
[7] Y. C. Lee, Y. C. Hsieh and P. S. You, A New Improved Secure Password Authentication Protocol to Resist Guessing Attack in Wireless Networks, Proceedings of the 7th WSEAS Int. Conf. on Applied
Computer & Applied Computational Science (ACACOS '08), Hangzhou, China, pp. 160-163, 2008.
[8] W. G. Shieh, M. T. Wang. An improvement on Lee et al.'s nonce-based authentication scheme. In WSEAS Transactions on Information Science and Applications, Vol. 1, WSEAS Press, 2007, pp. 832-836. ISSN 1790-0832.
[9] J. Yan, A. Blackwell, R. Anderson, A. Grant, The Memorability and Security of Passwords. Security and usability. O’Reilly Media, Inc. 2005. pp 129-142. ISBN 0-956-00827-9.
[10] F. T. Gramp, R. H. Morris. Unix Operating System Security. AT and T Bell Laboratories Technical Journal 63:8 (Oct. 1984), 1649-1672.
[11] D. V. Klein. Foiling the Cracker: A Survey of, and Improvements to, Password Security (revised paper). Proceedings of the USENIX Security Workshop (1990).
[12] M. Burnett, D. Kleiman. ed. Perfect Passwords. Rockland, MA: Syngress Publishing. 2006. p. 181. ISBN 1-59749-041-5.
[13] International Standards Organisation (ISO). International Standard ISO 9126. Information technology: Software product evaluation: Quality characteristics and guidelines for their use. 1991.
[14] M. Černá, P. Poulová. User testing of language educational portals. E+M Economics and Management, (3), pp. 104-117. Liberec, 2009. ISSN 1212-3609.
[15] Ch. P. Garrison. An Evaluation of Passwords. Online CPA Journal, May 2008. Available at http://www.nysscpa.org/cpajournal/2008/
[16] K. Renaud, Evaluating Authentication Mechanism. Security and usability. O’Reilly Media, Inc. 2005. pp 103-128. ISBN 0-956-00827-9.
[17] M. Hub, J. Čapek. Method of Password Security Evaluation. In Guo, Qingsping, Guo, Yucheng (eds.), The 8th International Symposium on Distributed Computing and Applications to Business, Engineering and Science, 2009, pp. 401-405. ISBN 978-7-121-09595-5.
[18] M. Zviran, W. J. Haga. A Comparison of Password Techniques for Multilevel Authentication Mechanisms. Computer Journal 36:3 (1993), 227-237.

Sunday, April 24, 2011

Web Application Scanners: Definitions and Functions

Abstract
There are many commercial software security assurance tools that claim to detect and prevent vulnerabilities in application software. However, a closer look at the tools often leaves one wondering which tools find what vulnerabilities. This paper identifies a taxonomy of software security assurance tools and defines one type of tool: web application scanner, i.e., an automated program that examines web applications for security vulnerabilities. We describe the types of functions that are generally found in a web application scanner and how to test it.
 
1. Introduction and motivation
New security vulnerabilities are discovered every day in commonly used applications. In recent years, web applications have become primary targets of attacks. The National Vulnerability Database (NVD) [14] maintained by the National Institute of Standards and Technology (NIST) has over 18,500 vulnerabilities (as of August 18, 2006). These include 2,757 buffer overflow, 2,147 cross-site scripting (XSS), and 1,600 SQL injection vulnerabilities. XSS and SQL injection vulnerabilities occur mostly in web-based applications.

Figure 1 shows the percentages of the total vulnerabilities reported in the NVD represented by cross-site scripting and SQL injection vulnerabilities. The NVD contains no reports for XSS and SQL
injection vulnerabilities prior to year 2000. The share of these vulnerabilities is large and rapidly growing. On the other hand, the share of the buffer overflows, a widely studied security weakness, has not increased in the last several years.

Web application security is difficult because these applications are, by definition, exposed to the general public, including malicious users. Additionally, input to web applications comes from within HTTP requests. Correctly processing this input is difficult. The incorrect or missing input validation causes most vulnerabilities in web applications.

Network firewalls, network vulnerability scanners, and the use of Secure Sockets Layer (SSL) do not make a web site secure [7]. The Gartner Group estimates that over 70% of attacks against a company's web site or web application come at the application layer, not the network or system layer [22].
 
Web application scanners help reduce the number of vulnerabilities in web applications. Briefly, web application scanners crawl through a web application’s pages and search the application for vulnerabilities by simulating attacks on it.
While web application scanners can find many vulnerabilities, they alone cannot provide evidence that an application is secure. Web application scanners are applied late in the software development life cycle. Security must be designed and built in. Different types of tools and best practices must be applied throughout the development life cycle[11].

Currently, there is no agreement about what a web application scanner is. To enable objective comparison of different tools, the required functionality of a web application scanner must be clearly identified.
We define "web application scanner" and present some vulnerabilities that this tool class should detect. This work is part of the NIST SAMATE project.
1.1. The SAMATE project  
The Software Assurance Metrics and Tool Evaluation (SAMATE) [23] project intends to provide a measure of confidence in the software tools used for software assurance. Part of the SAMATE project is the identification and measurement of software security assurance tools, including web application scanners.
When we have chosen a particular class of tools to work on, we begin by writing a specification. The specification typically consists of an informal list of features, and then more formally worded requirements for features, both mandatory and optional. For each tool class, we recruit a focus group to review and advise on specifications. We also develop a test plan and test sets to check that the tool is indeed capable of satisfying a set of mandatory requirements.
Currently, we are developing a specification and test plan for source code analyzers. We also plan to develop a specification for web application scanners.
1.2. Definitions
Often, different terms are used to refer to the same concept in security literature. Different authors may use the same term to refer to different concepts. For clarity we give our definitions.
Software assurance is the planned and systematic set of activities that ensures that software processes and products conform to requirements, standards and procedures in order to help achieve:
  • Trustworthiness – no exploitable vulnerabilities exist either of malicious or unintended origin, and
  • Predictable execution – justifiable confidence that software, when executed, functions as intended.
In general, a software security assurance (SSA) tool is an automated piece of software that detects or prevents security weaknesses and vulnerabilities.

Weaknesses in requirements, design, implementation, or operation may have either direct or indirect impact on security. In what follows, we use the terms “weakness” and “security weakness” interchangeably.
A weakness may result in a vulnerability, that is, a possibility of harming the system. A weakness may be the lack of program instructions, for example, lack of a check for buffer size. Since a weakness may or may not result in a vulnerability, we use the term "weakness" instead of "flaw" or "defect". Often, vulnerability is caused by a combination of weaknesses.

A false positive is a situation where a tool reports correct behavior as a vulnerability.

To accurately determine how well a tool checks for weaknesses, one must begin with a taxonomy of weaknesses. Several security weakness classification schemes have been proposed. The latest attempt at unifying the schemes is the Common Weakness Enumeration (CWE). 

1.3. A taxonomy of SSA tool classes

As the first step in identification of SSA tools, we need a taxonomy, or classification, of SSA tools and techniques in order to prioritize our effort.

We started by asking what classes of tools are currently used to identify potential vulnerabilities in software. We then asked what capabilities a tool should have to be placed into a particular class of tools. The taxonomy is organized around four facets: software development life cycle phase (from requirements to operation), automation level (from manual to fully automated), approach (preclude, detect, mitigate, react), and viewpoint (external vs. internal).

2. What is a web application?
The Web Application Security Consortium (WASC) defines a web application as "a software application, executed by a web server, which responds to dynamic web page requests over HTTP."

A web application comprises a collection of scripts that reside on a web server and interact with databases or other sources of dynamic content. Using the infrastructure of the Internet, web applications allow service providers and clients to share and manipulate information in a platform-independent manner. For a good introduction to web applications from a penetration tester's perspective, see [12].
The technologies used to build web applications include PHP, Active Server Pages (ASP), Perl, Common Gateway Interface (CGI), Java Server Pages (JSP), JavaScript, VBScript, etc. Some of the broad categories of web application technologies are communication protocols, formats, server-side and client-side scripting languages, browser plug-ins, and web server API.
A web application has a distributed n-tiered architecture. Typically, there is a client (web browser), a web server, an application server (or several application servers), and a persistence (database) server. Figure 2 presents a simplified view of a web application. There may be a firewall between web client and web server.
 
2.1. Sources of vulnerabilities in web applications
Web applications typically interact with the user via FORM (buttons, text boxes, etc.) elements and GET or POST variables. The incorrect processing of data elements within the HTTP requests causes most critical vulnerabilities in the web applications. While SSL ensures secure data transfer, it does not prevent these vulnerabilities because it transmits HTTP requests without scrutiny.

Web applications are a gateway to databases that hold critical application data and assets. Some of the main threats to the database server tier include SQL injection, unauthorized server access and password cracking. Most SQL injection vulnerabilities result from poor input validation.

Most web applications store sensitive information in databases or on a file system. Developers often make mistakes in the use of cryptographic techniques to protect this information.

Since HTTP is a stateless protocol, web applications use separate mechanisms to maintain session state. A session is a series of interactions between user and web application during a single visit to the web site. Typically, session management is done through the use of a pseudo-unique string called Session ID, which gets transmitted to the web server with every request. Most web scripting languages support sessions via GET variables and/or cookies. If an attacker can guess or steal a session ID, he can manipulate another user’s session.
We provide a list of vulnerabilities in Section 4.1.

3. What is a web application scanner?
A web application scanner is an automated program that examines web applications for security vulnerabilities. In addition to searching for web application specific vulnerabilities, the tools also look for software coding errors, such as illegal input strings and buffer overflows.
A web application scanner explores an application by crawling through its web pages and performs penetration testing – an active analysis of the web application by simulating attacks on it. This involves generation of malicious inputs and subsequent evaluation of the application's response. Web application scanners perform different types of attacks. A generally useful technique, called fuzzing, is submitting random inputs of various sizes to the application.
Penetration testing is a black-box testing approach. The limitation of this approach is its inability to examine source code, thus it is unlikely to detect such vulnerabilities as back doors. However, it is well suited for detecting input validation problems. Additionally, client-side code (JavaScript, etc.) is available to the penetration tester and can provide important information about the inner workings of a Web application.
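
As an illustration of the fuzzing step only – not of any particular commercial scanner – a minimal C# sketch that submits random inputs of various sizes to a single form parameter and flags suspicious responses could look like this; the target URL, the parameter name "q", and the choice of "suspicious" indicators are all assumptions.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SimpleFuzzer
{
    static readonly Random Rng = new Random();

    static string RandomInput(int length)
    {
        const string chars = "abcXYZ0123456789'\"<>;%&=-_";
        var sb = new StringBuilder(length);
        for (int i = 0; i < length; i++)
            sb.Append(chars[Rng.Next(chars.Length)]);
        return sb.ToString();
    }

    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            foreach (int size in new[] { 10, 100, 1000, 10000 })
            {
                string payload = RandomInput(size);
                var form = new FormUrlEncodedContent(
                    new Dictionary<string, string> { { "q", payload } });   // assumed parameter

                // assumed target; a real scanner would first crawl the application
                var response = await client.PostAsync("http://localhost:8080/search", form);
                string body = await response.Content.ReadAsStringAsync();

                // crude indicators of trouble: server errors or leaked stack traces
                if ((int)response.StatusCode >= 500 || body.Contains("Exception"))
                    Console.WriteLine("Suspicious response for input of length " + size);
            }
        }
    }
}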
Some instances of commercial web application scanners are listed below. This list is obtained from references [5,25,6] and web sites.
  • AppScan [29]
  • WebKing [20]
  • WebInspect [26]
  • NTOspider [16]
3.1. Other web application security tool types

We contrast web application scanner with some other approaches and point out their differences.
A web application firewall, sometimes called a wrapper, is a tool that examines HTTP requests and responses for application-specific vulnerabilities. It is used primarily during the system operation phase, whereas web application scanners are used primarily during the testing phase. Also, a web application scanner performs active detection by simulating attacks, whereas a web application firewall mitigates vulnerabilities.
Although a web application firewall can be used to detect vulnerabilities by examining saved attack information, the detection is passive. That is, nothing is detected until and unless an attack triggers a response indicating a vulnerability.
Source code analysis is a white-box testing approach that scans the application source code for security weaknesses. Source code scanners are primarily used during the implementation phase of the software development life cycle. Some source code scanners can detect web application specific vulnerabilities.
Using a framework is another approach. Frameworks assist coders and security analysts in the process of testing their Web applications, either by providing an interface that exposes the internals of the HTTP traffic, or by helping create automated tests for custom Web applications.
No single approach is sufficient to make web applications secure: different types of tools must be used at different stages of the development life cycle, starting with the early phases. Below are some instances of web security tools which are not web application scanners.
  • NC2000 [15] is an application gateway. It is a physical box that is placed in front of a web server and examines the traffic to/from the web application. 
  • Nessus [27] is an open source scanner that supports a plugin architecture allowing users to develop security checks with the NASL (Nessus Attack Scripting Language). 
  • WebScarab [19] is a framework for analyzing applications that communicate using the HTTP and HTTPS protocols. It observes the conversations (requests and responses) and allows the operator to review them. It provides a number of plugins, mainly aimed at security functionality. Plugins perform one of two tasks: generate requests or analyze conversations.
3.2. Other types of information security tools
SANS Institute [25] classifies the information security tools into the following five categories:
  1. Blocking attacks: Network based (includes secure web filtering)
  2. Blocking attacks: Host based
  3. Eliminating security vulnerabilities (includes penetration testing and application security testing)
  4. Safely supporting authorized users
  5. Tools to minimize business losses and maximize effectiveness
Web application scanners are in category 3. The class of web application scanners consists of tools that detect potential vulnerabilities in the web applications only, and not on the network. In addition to web application scanners, the overall security defense should include tools for web services, database scanners, network firewalls, anti-virus gateways, routers, intrusion detection/protection systems, and other tools.

4. Functional requirements for web application scanner
To develop a specification for web application scanners, we must clearly define a set of functions that a tool must successfully perform. A web application scanner must:

  • Identify a selected set of software security vulnerabilities in a web application. 
  • Generate a text report indicating an action (or a sequence of actions) that leads to vulnerability. 
  • Generate an acceptably low ratio of false positives.
4.1. Some web application vulnerabilities
In this section, we identify a list of vulnerabilities that a web application scanner should detect. This list will form the basis for a formally worded requirement for mandatory features for a web application scanner. An extensive classification of web security threats can be found in [30]. The Open Web Application Security Project (OWASP) publishes the list of the most critical web application vulnerabilities [17]. These and other efforts are being incorporated into CWE [4].
Input validation weaknesses cause most web application vulnerabilities. Other types of weaknesses include use of poor authentication mechanisms, logic weaknesses, unintentional disclosure of content and environment information, and low-level coding weaknesses (such as buffer overflows). Often, vulnerability is caused by a combination of weaknesses. Some common vulnerabilities and attacks are:
  • Cross-site scripting (XSS) vulnerabilities. The vulnerability occurs when an attacker submits malicious data to a web application. Examples of such data are client-side scripts and hyperlinks to an attacker's site. If the application gathers the data without proper validation and dynamically displays it within its generated web pages, it will display the malicious data in a legitimate user's browser. As a result, the attacker can manipulate or steal the credentials of the legitimate user, impersonate the user, or execute malicious scripts on the user's machine (a small encoding example follows this list).
  • Injection vulnerabilities. This includes data injection, command injection, resource injection, and SQL injection. SQL Injection occurs when a web application does not properly filter user input and places it directly into a SQL statement. This can allow disclosure and/or modification of data in the database. Another possible object of injection is executable scripts, which can be coerced into doing things that their authors did not anticipate.
  • Cookie poisoning is a technique mainly for achieving impersonation and breach of privacy through manipulation of session cookies, which maintain the identity of the client. By forging these cookies, an attacker can impersonate a valid client, and thus gain information and perform actions on behalf of the victim.
  • Unvalidated input. XSS, SQL Injection, and cookie poisoning vulnerabilities are some of the specific instances of this problem. In addition, it includes tainted data and forms, improper use of hidden fields, use of unvalidated data in array index, in function call, in a format string, in loop condition, in memory allocation and array allocation.
  • Authentication, authorization and access control vulnerabilities could allow a malicious user to gain control of the application or backend servers. This includes weak password management, use of poor encryption methods, use of privilege elevation, use of insecure macros for dangerous functions, use of unintended copy, authentication errors, and cryptographic errors.
  • Incorrect error handling and reporting may reveal information thus opening doors for malicious users to guess sensitive information. This includes catch NullPointerException, empty catch block, overly-broad catch block and overly-broad “throws” declaration. 
Some other vulnerabilities are:
  • Denial of service
  • Path manipulation
  • Broken session management
  • Synchronization timing problems
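
To illustrate the cross-site scripting item above (the mitigation side, not a scanner's detection logic), the usual server-side defense in ASP.NET is to HTML-encode untrusted data before echoing it into a page. A minimal sketch follows, with a hypothetical "comment" value standing in for user input.

using System;
using System.Web;   // HttpUtility

class XssEncodingExample
{
    static void Main()
    {
        // untrusted data, e.g. taken from a form field or a query string
        string comment = "<script>alert(document.cookie)</script>";

        // vulnerable pattern: echoing the raw input into the generated page
        string unsafeHtml = "<p>" + comment + "</p>";

        // mitigated pattern: HTML-encode before writing it into the response
        string safeHtml = "<p>" + HttpUtility.HtmlEncode(comment) + "</p>";

        Console.WriteLine(unsafeHtml);
        Console.WriteLine(safeHtml);   // the script tags are rendered as text, not executed
    }
}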
More work is needed to refine the list of vulnerabilities that the web application scanners must support. 

5. Issues in testing web application scanners
In addition to a functional specification, we need a test plan and a suite (or several suites) of test cases to check that a web application scanner satisfies the specification.

A test plan details how a tool is tested, how to interpret test results, and how to summarize or report tests. Currently, tools produce reports in a variety of formats. A common reporting format would make it easier to automate comparison of different tools.

We measure conformance of a tool to the specification by running it against a variety of test cases. In choosing test cases, it is important to understand the ways in which an attacker exploits vulnerabilities.

In normal operation, a user submits a request to the web application and gets a response back. An attacker submits an unexpected request to an application in hopes of exploiting an existing vulnerability. The goal of the attacker is to violate the application's security policy. The attacker recognizes the existence of a vulnerability either by examining the application's response or indirectly, by noticing changes in the application's behavior (this may include probing different parts of the application). A web application scanner works by simulating the attacker's actions.

To test web application scanners, we need web applications with vulnerabilities. For each vulnerability class, there must be at least one test application that exhibits it. Small test cases with a single vulnerability can be used to precisely test tools’ ability to detect specific vulnerabilities. Large applications with a variety of vulnerabilities, such as WebGoat [18], will test scalability of a tool for real life applications. It is also important to test tools’ ability to detect vulnerabilities in web applications built using different web technologies.

A basic test suite may contain only applications with easily exploitable vulnerabilities. For instance, if an application does no input validation at all, there are many ways to exploit the vulnerability and most tools can find it. However, to thoroughly test a scanner, we need programs with subtle vulnerabilities.

Different types of SQL injection represent another example. An attacker typically sends a request to cause the application to generate a SQL query that can induce unexpected behavior. Then the attacker examines the error message returned to the web client. A typical mitigation approach is to prevent the application from displaying any database error messages. The vulnerability, though harder to detect, still exists – it is called “blind SQL injection”.

In order to check for false positives, we need test cases that are free of vulnerabilities but have some features that cause difficulty for web application scanners. Generation of such test cases is an interesting research problem that requires understanding the way the tools work.

While developing test suites, we collect much larger numbers of candidate test cases. This collection, the SAMATE Reference Dataset (SRD) [23], is freely accessible on-line. We intend the database to support empirical research of software assurance. It contains over 1,600 test cases for source code analysis tools (as of August 18, 2006). We intend to add many test cases for web application scanners. We welcome participation from researchers and companies.

6. Summary
We defined web application scanners and presented some vulnerabilities that this class of tools should detect. We plan to develop a specification for web application scanners. The specification will give a precise definition of functions that the tools in this class must perform. We will develop suites of test cases to measure conformance of tools to the specification. This will enable more objective comparison of web application scanners and stimulate their improvement.

7. Acknowledgments
We thank Jeffrey Meister, Paul E. Black, and Eric Dalci for improving our understanding of web application scanners and many helpful suggestions on this paper. We also thank the anonymous reviewers for their insightful comments.

8. References
[1] A. Avizienis, J-C. Laprie, B. Randell, and C. Landwehr, “Basic Concepts and Taxonomy of Dependable and Secure Computing,” IEEE Trans. on Dependable and Secure Computing, 1(1):11-33, Jan-Mar 2004.

[2] M. Bishop and D. Bailey, “A Critical Analysis of Vulnerability Taxonomies,” Technical Report 96-11, Department of Computer Science, University of California at Davis, Sep. 1996.


[3] Black, Paul E. and Fong, Elizabeth, “Proceedings of Defining the State of the Art in Software Security Tool Workshop,” NIST Special Publication 500-264, September 2005.

[4] Common Weakness Enumeration (CWE), MITRE, http://cve.mitre.org/cwe/

[5] DISA, Application Security Tool Assessment Survey, V3.0, July 29, 2004. (To be published as STIG)

[6] Arian J. Evans, “Software Security Quality: Testing Taxonomy and Testing Tools Classification,” Presentation viewgraph for OWASP APPSec DC, October 2005.

[7] Jeremiah Grossman, The Five Myths of Web Application Security, WhiteHat Security, Inc, 2005.

[8] Michael Howard, David LeBlanc, and John Viega, 19 Deadly Sins of Software Security. McGraw-Hill Osborne Media, July 2005.

[9] Andrew J. Kornecki and Janusz Zalewski, The Qualification of Software Development Tools From the DO-178B Certification Perspective, CrossTalk, pages 19-23, April 2006

[10] C. E. Landwehr, A. R. Bull, J. P. McDermott, and W. S. Choi, “A Taxonomy of Computer Program Security Flaws,” Information Technology Division, Naval Research Laboratory, Washington, D. C., September 1994.

[11] G. McGraw, Software Security: Building Security In, Addison-Wesley Software Security Series, 2006.

[12] Jody Melbourne and David Jorm, Penetration Testing for Web Applications, in SecurityFocus, 2003.

[13] NASA Software Assurance Guidebook and Standard, http://satc.gsfc.nasa.gov/assure/assurepage.html

[14] National Vulnerability Database (NVD), http://nvd.nist.gov/

[15] Netcontinuum, NC2000, http://netcontinuum.com/products/

[16] NT Objectives, NTOSpider, http://www.ntobjectives.com/products/ntospider.php

[17] OWASP, “The Ten Most Critical Web Application Security Vulnerabilities,” http://www.owasp.org/index.php/OWASP_Top_Ten_Project

[18] OWASP, WebGoat Project, http://www.owasp.org/software/webgoat.html.
 
[19] OWASP, WebScarab http://www.owasp.org/software/webscarab/
 
[20] Parasoft, WebKing, http://www.parasoft.com/webking

[21] F. Piessens. “A taxonomy (with examples) of software vulnerabilities in Internet software,” Report CW 346, Katholieke University Leuven, 2002.

[22] Prescatore, John, Gartner, quoted in Computerworld, Feb. 25, 2005, http://www.computerworld.com/printthis/2005/0,4814,99981,00.html

[23] SAMATE project, http://samate.nist.gov/

[24] SAMATE Tool Taxonomy, http://samate.nist.gov/index.php/Tool_Taxonomy

[25] SANS Institute, http://www.sans.org/whatworks

[26] SPI Dynamics, WebInspect, http://www.spidynamics.com/products/webinspect/

[27] Tenable Network Security, Nessus, http://www.nessus.org/about/

[28] K. Tsipenyuk, B. Chess, and G. McGraw, “Seven Pernicious Kingdoms: A Taxonomy of Software Security Errors,” Proc. NIST Workshop on Software Security Assurance Tools, Techniques, and Metrics (SSATTM), US National Institute of Standards and Technology, 2005.

[29] Watchfire, AppScan, http://www.watchfire.com/products/appscan/

[30] Web Application Security Consortium, “Threat Classification,” http://www.webappsec.org/projects/threat/

[31] Web Application Security Consortium Glossary, http://www.webappsec.org/projects/glossary/ 

String Format for DateTime [C#]

This example shows how to format DateTime using the String.Format method. All of the formatting can also be done using the DateTime.ToString method.
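
For instance, the following two calls produce the same result for a given format pattern (dt is declared here the same way as in the examples below):

DateTime dt = new DateTime(2008, 3, 9, 16, 5, 7, 123);
String.Format("{0:yyyy-MM-dd HH:mm:ss}", dt);   // "2008-03-09 16:05:07"
dt.ToString("yyyy-MM-dd HH:mm:ss");             // "2008-03-09 16:05:07"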

Custom DateTime Formatting

The following custom format specifiers are available: y (year), M (month), d (day), h (hour, 12-hour clock), H (hour, 24-hour clock), m (minute), s (second), f (second fraction), F (second fraction, trailing zeroes trimmed), t (A.M. or P.M.) and z (time zone).

The following examples demonstrate how the format specifiers are rewritten in the output.
// create date time 2008-03-09 16:05:07.123
DateTime dt = new DateTime(2008, 3, 9, 16, 5, 7, 123);
String.Format("{0:y yy yyy yyyy}", dt);
// "8 08 008 2008" year
String.Format("{0:M MM MMM MMMM}", dt);
// "3 03 Mar March" month
String.Format("{0:d dd ddd dddd}", dt);
// "9 09 Sun Sunday" day
String.Format("{0:h hh H HH}", dt);
// "4 04 16 16" hour 12/24
String.Format("{0:m mm}", dt);
// "5 05" minute
String.Format("{0:s ss}", dt);
// "7 07" second
String.Format("{0:f ff fff ffff}", dt);
// "1 12 123 1230" sec.fraction
String.Format("{0:F FF FFF FFFF}", dt);
// "1 12 123 123" without zeroes
String.Format("{0:t tt}", dt);
// "P PM" A.M. or P.M.
String.Format("{0:z zz zzz}",
dt); // "-6 -06 -06:00" time zone

You can also use the date separator / (slash) and the time separator : (colon). These characters will be rewritten to the characters defined in the current DateTimeFormatInfo.DateSeparator and DateTimeFormatInfo.TimeSeparator.

// date separator in german culture is "." (so "/" changes to ".")
String.Format("{0:d/M/yyyy HH:mm:ss}", dt); // "9/3/2008 16:05:07" - english (en-US)
String.Format("{0:d/M/yyyy HH:mm:ss}", dt); // "9.3.2008 16:05:07" - german (de-DE)

Here are some examples of custom date and time formatting:

// month/day numbers without/with leading zeroes
String.Format("{0:M/d/yyyy}", dt);
// "3/9/2008"
String.Format("{0:MM/dd/yyyy}", dt);
// "03/09/2008"
// day/month names
String.Format("{0:ddd, MMM d, yyyy}", dt);
// "Sun, Mar 9, 2008"
String.Format("{0:dddd, MMMM d, yyyy}", dt);
// "Sunday, March 9, 2008"
// two/four digit year
String.Format("{0:MM/dd/yy}", dt);
// "03/09/08"
String.Format("{0:MM/dd/yyyy}", dt);
// "03/09/2008"

Standard DateTime Formatting

In DateTimeFormatInfo there are standard patterns defined for the current culture. For example, the ShortTimePattern property is a string that contains the value h:mm tt for the en-US culture and the value HH:mm for the de-DE culture.

The following table shows the patterns defined in DateTimeFormatInfo and their values for the en-US culture. The first column contains the format specifiers for the String.Format method.

Specifier  DateTimeFormatInfo property       Pattern value (for en-US culture)
t          ShortTimePattern                  h:mm tt
d          ShortDatePattern                  M/d/yyyy
T          LongTimePattern                   h:mm:ss tt
D          LongDatePattern                   dddd, MMMM dd, yyyy
f          (combination of D and t)          dddd, MMMM dd, yyyy h:mm tt
F          FullDateTimePattern               dddd, MMMM dd, yyyy h:mm:ss tt
g          (combination of d and t)          M/d/yyyy h:mm tt
G          (combination of d and T)          M/d/yyyy h:mm:ss tt
m, M       MonthDayPattern                   MMMM dd
y, Y       YearMonthPattern                  MMMM, yyyy
r, R       RFC1123Pattern                    ddd, dd MMM yyyy HH':'mm':'ss 'GMT' (*)
s          SortableDateTimePattern           yyyy'-'MM'-'dd'T'HH':'mm':'ss (*)
u          UniversalSortableDateTimePattern  yyyy'-'MM'-'dd HH':'mm':'ss'Z' (*)
(*) = culture independent

The following examples show the usage of standard format specifiers in the String.Format method and the resulting output.

String.Format("{0:t}", dt); // "4:05 PM"
ShortTime
String.Format("{0:d}", dt); // "3/9/2008"
ShortDate
String.Format("{0:T}", dt); // "4:05:07 PM"
LongTime
String.Format("{0:D}", dt); // "Sunday, March 09, 2008"
LongDate
String.Format("{0:f}", dt); // "Sunday, March 09, 2008 4:05 PM"
LongDate+ShortTime
String.Format("{0:F}", dt); // "Sunday, March 09, 2008 4:05:07 PM"
FullDateTime
String.Format("{0:g}", dt); // "3/9/2008 4:05 PM"
ShortDate+ShortTime
String.Format("{0:G}", dt); // "3/9/2008 4:05:07 PM"
ShortDate+LongTime
String.Format("{0:m}", dt); // "March 09"
MonthDay
String.Format("{0:y}", dt); // "March, 2008"
YearMonth
String.Format("{0:r}", dt); // "Sun, 09 Mar 2008 16:05:07 GMT"
RFC1123
String.Format("{0:s}", dt); // "2008-03-09T16:05:07"
SortableDateTime
String.Format("{0:u}", dt); // "2008-03-09 16:05:07Z"
UniversalSortableDateTime

Secure Your ASP.NET Application from a SQL Injection Attack

What is a SQL Injection Attack?
A SQL Injection Attack is when an attacker is able to execute potentially malicious SQL commands by putting SQL queries into web form input or the query string of a page request. Input forms where user or query string input directly affects the building of dynamic SQL queries or stored procedure input parameters are vulnerable to such an attack. A common scenario is as follows:
  • A web application has a login page through which access to the application is controlled. The login page requires a login and password to be provided.  
  • The input from the login page is used to build a dynamic SQL statement or as direct input to a stored procedure call. The following code is an example of what could be used to build the query: 
 System.Text.StringBuilder query =
new System.Text.StringBuilder(
"SELECT * from Users WHERE login = '")
.Append(txtLogin.Text).Append("' AND password='")
.Append(txtPassword.Text).Append("'");
  • The attacker enters input such as "' or '1'='1" for the login and the password.  
  • The resulting SQL dynamic statement becomes something similar to: "SELECT * from Users WHERE login = '' or '1'='1' AND password = '' or '1'='1'". 
  • The query or stored procedure is then executed to compare the inputted credentials with those persisted in the database. 
  • The query is executed against the database and the attacker is incorrectly granted access to the web application because the SQL command was altered through the injected SQL statement.
Knowing that there is a relatively good chance the application is going to take the input and execute a search to validate it, an attacker is able to enter a partial SQL string that will cause the query to return all users and grant them access to the application.
What could someone do to my application?
The amount of damage an attacker could do is different for each environment. It mainly depends upon the security privileges under which your application is accessing the database. If the user account has administrator or some elevated privileges, then the attacker could do pretty much whatever they wanted to the application database tables, including adding, deleting, or updating data or even potentially dropping tables altogether.
How do I prevent it?
  • The good news is that preventing your ASP.NET application from being susceptible to a SQL Injection Attack is a relatively simple thing to do. You must filter all user input prior to using it in a query statement. The filtering can take on many forms.  
    • If you are using dynamically built queries, then employ the following techniques: 
    • Delimit single quotes by replacing any instance of a single quote with two single quotes which prevents the attacker from changing the SQL command. Using the example from above, "SELECT * from Users WHERE login = ''' or ''1''=''1' AND password = ''' or ''1''=''1'" has a different result than "SELECT * from Users WHERE login = '' or '1'='1' AND password = '' or '1'='1'". 
    • Remove hyphens from user input to prevent the attacker from constructing a query similar to: SELECT * from Users WHERE login = 'mas' -- AND password ='' that would result in the second half of the query being commented out and ignored. This would allow an attacker that knows a valid user login to gain access without knowing the user's password. 
  • Limit the database permissions granted to the user account under which the query will be executing. Use different user accounts for selecting, inserting, updating, and deleting data. By separating the actions that can be performed by different accounts you eliminate the possibility that an insert, update, or delete statement could be executed in place of a select statement or vice versa.  
  • Setup and execute all queries as stored procedures. The way SQL parameters are passed prevents the use of apostrophes and hyphens in a way that would allow an injection attack to occur. In addition, it allows database permissions to be restricted to only allow specific procedures to be executed. All user input must then fit into the context of the procedure being called, and it is less likely an injection attack could occur (a short sketch of this approach follows this list).
  • Limit the length of the form or query string input. If your login is 10 characters long, then make sure you don't allow more characters than that to be input for the value. This will make it more difficult to inject potentially harmful SQL statements into the input. 
  • Perform validation on the user input to verify the input is limited to desired values. Data validation should be performed at both the client and the server. The server side validation is required to avoid a security weakness exposed by the client side validation. It is possible for an attacker to access and save your source code, modify your validation scripts (or simply remove them), and submit the form to your server with inappropriate data. The only way to be absolutely sure that validation has been performed is to perform validation on the server as well. There are a number of pre-built validation objects such as RegularExpressionValidator that can auto generate the client side script to perform validation, and allow you to hook in a server side method as well. If you don't find one that meets your needs within the palette of available validators, you can create your own using the CustomValidator. 
  • Store data such as user logins and passwords in a hashed (or otherwise encrypted) format, and hash the user input before comparing it against the values stored in the database. The comparison is then performed on sanitized values that have no meaning to the database, which prevents the attacker from injecting SQL commands. The System.Web.Security.FormsAuthentication class has a HashPasswordForStoringInConfigFile method that is particularly useful for this. 
  • Validate the number of rows returned from a query that is retrieving data. If you are expecting to retrieve a single row of data, then throw an error if multiple rows are retrieved.
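
To make the list above concrete, here is a minimal TypeScript sketch (not the article's ASP.NET code) that combines several of the techniques: limiting input length, whitelisting characters, doubling single quotes, stripping comment hyphens, and hashing the password before it is compared. The Users table, the column names, and the 10-character limit are assumptions made up for the example.

// Minimal sketch combining the filtering ideas above; names and limits are invented.
import { createHash } from "crypto";

const MAX_LOGIN_LENGTH = 10;                  // "limit the length of the input"
const LOGIN_PATTERN = /^[A-Za-z0-9_]{1,10}$/; // server-side whitelist validation

// Double single quotes and strip "--" sequences so the input cannot end the
// string literal or comment out the rest of the statement.
function sanitize(value: string): string {
  return value.slice(0, MAX_LOGIN_LENGTH).replace(/'/g, "''").replace(/--/g, "");
}

// Hash the password so the value compared against the database has no meaning
// to the SQL parser (SHA-256 here purely for illustration).
function hashPassword(password: string): string {
  return createHash("sha256").update(password).digest("hex");
}

// Build the login query only after the input has passed validation.
function buildLoginQuery(login: string, password: string): string | null {
  if (!LOGIN_PATTERN.test(login)) {
    return null;                              // reject unexpected characters outright
  }
  return "SELECT * FROM Users WHERE login = '" + sanitize(login) +
         "' AND passwordHash = '" + hashPassword(password) + "'";
}

console.log(buildLoginQuery("mas' --", "secret")); // null: fails validation
console.log(buildLoginQuery("mas", "secret"));     // one well-formed statement

Parameterized stored procedures, as recommended above, remove the need for most of this string handling, because the database never interprets the input as part of the SQL text.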

AJAX Web Development Mega Pack [MOV-AVC1]



AJAX Web Development Mega Pack [MOV-AVC1]
MOV | AVC1 | 15fps 320kbps | 128kbps 44100HZ | 400MB


- AJAX Essential Training Lynda.com
- AJAX Crash Course by SitePoint

Ajax is an acronym for Asynchronous JavaScript and XML, and at its heart is the XMLHTTPRequest object, which is part of the XML DOM (Document Object Model).

The XML (Extensible Markup Language) Document Object Model defines a standard way of accessing and manipulating XML documents. The DOM enables JavaScript to fully access XML or XHTML documents by exposing the elements that define their structure. This access is provided through a set of intrinsic JavaScript objects focused on DOM manipulation, and it is what we use to parse the responses we receive from the server when we create an XMLHttpRequest (XHR). As mentioned earlier, the XHR is the core of the Ajax model; without it the model would not exist. This is the piece of the Ajax puzzle that has created the recent buzz, because it allows HTTP requests to be made to the server without refreshing the browser.
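
As a rough, self-contained illustration of that description, the browser-side TypeScript sketch below issues an asynchronous request with XMLHttpRequest and walks the parsed responseXML document with the DOM API; the URL, tag name, and element ID are placeholders invented for the example.

// Browser-side sketch: fetch an XML fragment asynchronously and update part
// of the page without reloading it. URL and element IDs are placeholders.
function fetchFragment(url: string, render: (doc: Document) => void): void {
  const xhr = new XMLHttpRequest();       // the object at the heart of Ajax
  xhr.open("GET", url, true);             // true = asynchronous
  xhr.onreadystatechange = () => {
    // readyState 4: response complete; status 200: success.
    if (xhr.readyState === 4 && xhr.status === 200 && xhr.responseXML) {
      render(xhr.responseXML);            // already parsed into a DOM Document
    }
  };
  xhr.send();
}

fetchFragment("/latest-items.xml", (doc) => {
  const titles = Array.from(doc.getElementsByTagName("title"));
  const target = document.getElementById("items");
  if (target) {
    target.textContent = titles.map(t => t.textContent ?? "").join(", ");
  }
});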

Download (filesonic)
http://www.filesonic.com/file/78080875/AJAX.part1.rar
http://www.filesonic.com/file/78080902/AJAX.part2.rar
http://www.filesonic.com/file/78185956/AJAX.part3.rar
http://www.filesonic.com/file/78186088/AJAX.part4.rar

Tuesday, April 19, 2011

Users: Reparacion de PC, Curso Practico


All the tools you need to manage and set up your own business.
Reparación de PC is the best course for learning to assemble computers professionally. Get trained: learn how to purchase, budget, and build PCs, with the theoretical and practical backing that only the specialists at USERS can provide.
CD#01: Videos
Must-have material for becoming a true professional in the repair of the PC's internal and external components.

CD#02: Software
Hard drives
Maintenance
RAM memory
Monitors
Motherboards
PDAs
Boards
Video cards
And more!

CD#03: Complementary Tools
Glossary, FAQs, and a complete collection of circuit schematics and diagrams for the most popular devices.
Download



CD1

http://depositfiles.com/files/6822271/Users.Reparacion.de.PC.CD1.part1.rar
http://depositfiles.com/files/6822774/Users.Reparacion.de.PC.CD1.part2.rar
http://depositfiles.com/files/6822551/Users.Reparacion.de.PC.CD1.part3.rar
http://depositfiles.com/files/6822278/Users.Reparacion.de.PC.CD1.part4.rar
http://depositfiles.com/files/6822773/Users.Reparacion.de.PC.CD1.part5.rar
http://depositfiles.com/files/6822714/Users.Reparacion.de.PC.CD1.part6.rar

CD2

http://depositfiles.com/files/6822703/Users.Reparacion.de.PC.CD2.part1.rar
http://depositfiles.com/files/6822699/Users.Reparacion.de.PC.CD2.part2.rar
http://depositfiles.com/files/6822717/Users.Reparacion.de.PC.CD2.part3.rar
http://depositfiles.com/files/6822709/Users.Reparacion.de.PC.CD2.part4.rar
http://depositfiles.com/files/6822882/Users.Reparacion.de.PC.CD2.part5.rar
http://depositfiles.com/files/6823244/Users.Reparacion.de.PC.CD2.part6.rar
http://depositfiles.com/files/6822762/Users.Reparacion.de.PC.CD2.part7.rar

CD3

http://depositfiles.com/files/6822864/Users.Reparacion.de.PC.CD3.part1.rar
http://depositfiles.com/files/6822760/Users.Reparacion.de.PC.CD3.part2.rar
http://depositfiles.com/files/6822723/Users.Reparacion.de.PC.3CDs.sfv

DOMINANDO PHOTOSHOP CS3 AVANZADO

This DOMINANDO PHOTOSHOP CS3 AVANZADO course comes from our friend S@C, who published it on his blog; you can visit it to find full tutorials on all sorts of topics.





DOMINANDO PHOTOSHOP CS3 AVANZADO
==================================
ISO | 2.47 GB | Spanish | PC | MAC


How to use Photoshop to impress your clients every time and be the envy of your colleagues. The course is divided into 12 categories, 40 topics, and 70 videos that will radically transform, once and for all, the way you design. It teaches advanced techniques for Photoshop CS, CS2, and CS3, explained in clear, natural language. Essential for designers who want to take their creations to the next level. The author claims to have gathered all of his advanced techniques, secrets, and tricks into this striking, complete, and entertaining DVD course, "Dominando Photoshop CS3 Avanzado".
PROJECTS YOU WILL COMPLETE IN THE COURSE
High-Tech Style: How to create compositions with a "technological" look that never come across as boring or predictable.


Classic Style: We create, from scratch, a composition with a "refined and exclusive" style. It is surprising how simple it turns out to be with this video.

Children's Style: Designs for the little ones: fun, colorful (but professional) compositions in Photoshop.

Dark Style: For designs with that menacing, not-so-friendly look that is sometimes necessary.

DOWNLOAD


Introduction to Data Protection

Extracted from Data Protection and Information Lifecycle Management

1. What Does Data Protection Mean?
Data protection is just what it sounds like: protecting important data from damage, alteration, or loss. Although that sounds simple enough, data protection encompasses a host of technology, business processes, and best practices. Different techniques must be used for different aspects of data protection. For example, securing storage infrastructure is necessary to ensure that data is not altered or maliciously destroyed. To protect against inadvertent data loss or permanent corruption, a solid backup strategy with accompanying technology is needed.
The size of an enterprise determines which practices, processes, or technologies are used for data protection. It is not reasonable to assume that a small business can deploy expensive, high-end solutions to protect important data. On the other hand, backing up data to tape or disk is certainly something that any enterprise can do. A large enterprise will have both the resources and the motivation to use more advanced technology.
The goal is the same no matter what the size or makeup of the company. Data protection strives to minimize business losses due to the lack of verifiable data integrity and availability.
The practices and techniques to consider when developing a data protection strategy are:
  • Backup and recovery: the safeguarding of data by making offline copies of the data to be restored in the event of disaster or data corruption.
  • Remote data movement: the real-time or near-real-time moving of data to a location outside the primary storage system or to another facility to protect against physical damage to systems and buildings. The two most common forms of this technique are remote copy and replication. These techniques duplicate data from one system to another, in a different location.
  • Storage system security: applying best practices and security technology to the storage system to augment server and network security measures.
  • Data Lifecycle Management (DLM): the automated movement of critical data to online and offline storage. Important aspects of DLM are placing data considered to be in a final state into read-only storage, where it cannot be changed, and moving data to different types of storage depending on its age (a rough sketch of such a rule follows this list).
  • Information Lifecycle Management (ILM): a comprehensive strategy for valuing, cataloging, and protecting information assets. It is tied to regulatory compliance as well. ILM, while similar to DLM, operates on information, not raw data. Decisions are driven by the content of the information, requiring policies to take into account the context of the information. 
All these methods should be deployed together to form a proper data protection strategy.
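
As a rough illustration of the DLM entry above, here is a small TypeScript (Node.js) sketch of one invented lifecycle rule: files untouched for more than a year are moved from an "active" directory to an "archive" directory and made read-only. Real DLM products operate on storage tiers and far richer policies; the paths and the 365-day threshold are assumptions for the example.

// Node.js sketch of a single DLM-style rule; paths and threshold are invented.
import { promises as fs } from "fs";
import * as path from "path";

const ARCHIVE_AFTER_DAYS = 365;   // assumed retention threshold
const READ_ONLY = 0o444;          // final-state data should not be modifiable

async function applyDlmRule(activeDir: string, archiveDir: string): Promise<void> {
  const cutoff = Date.now() - ARCHIVE_AFTER_DAYS * 24 * 60 * 60 * 1000;
  await fs.mkdir(archiveDir, { recursive: true });
  for (const name of await fs.readdir(activeDir)) {
    const source = path.join(activeDir, name);
    const stats = await fs.stat(source);
    if (stats.isFile() && stats.mtimeMs < cutoff) {
      const destination = path.join(archiveDir, name);
      await fs.rename(source, destination);     // move to the "archive" tier
      await fs.chmod(destination, READ_ONLY);   // lock data in its final state
    }
  }
}

applyDlmRule("/data/active", "/data/archive").catch(console.error);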

2. A Model for Information, Data, and Storage

Traditionally, storage infrastructure was viewed differently from the data and information that was placed on it. A new, unified model has emerged that ties together hardware, management, applications, data, and information. As Figure 1-1 shows, the entire spectrum from devices through information can be thought of as a series of layers, each building upon the one below and providing more advanced services at each level.

The model begins with the traditional world of storage: the hardware. The hardware or device layer includes all the hardware components that comprise a storage system, including disks and tapes up to entire Storage Area Networks (SAN).
Next is the management layer. This layer is comprised of all the tools for managing the hardware resources. Some typical functions of this layer include device and network management, resource management, network analysis, and provisioning.
The data management layer consists of tools and techniques to manage data. Some typical functions within this layer are backup and recovery, remote copy, replication, and Data Lifecycle Management practices.
The final piece of the model, and the uppermost layer, is the information management layer. This layer addresses the difference between information and data: context. Business practices such as Information Lifecycle Management look at what a collection of data means and manages it accordingly.
Data protection cuts across all levels of the model. A successful data protection strategy will take into account the hardware, especially its security and configuration. The management layer is less pronounced in the data protection strategy, because it mainly serves the hardware. The data management layer is heavily involved, and the information management portion ties many aspects of data protection together while filling in significant gaps.
While reading the rest of this book, keep this model in mind. It will help provide a framework for thinking about data protection.

3. Why Is Data Protection Important to the Enterprise?

There are several reasons for spending money, time, and effort on data protection. The primary one is minimizing financial loss, followed by compliance with regulatory requirements, maintaining high levels of productivity, and meeting customer expectations. As computers have become more and more integral to business operations, data requirements from regulators such as the U.S. Securities and Exchange Commission (SEC), as well as from customers, have been imposed on businesses. There is a clear expectation that important data be available 24 hours a day, 7 days a week, 365 days a year. Without a working data protection strategy, that isn't possible.
The single most important reason to implement data protection strategies is fear of financial loss. Data is recognized as an important corporate asset that needs to be safeguarded. Loss of information can lead to direct financial losses, such as lost sales, fines, or monetary judgments. It can also cause indirect losses from the effects of a drop in investor confidence or customers fleeing to competitors. Worse yet, stolen or altered data can result in financial effects that are not known to the company until much later. At that point, less can be done about it, magnifying the negative results.
Another important business driver for data protection is the recent spate of regulations. Governments throughout the world have begun imposing new regulations on electronic communications and stored data. Businesses face dire consequences for noncompliance. Some countries hold company executives criminally liable for failure to comply with laws regarding electronic communications and documents. These regulations often define what information must be retained, for how long, and under what conditions. Other laws are designed to ensure the privacy of the information contained in documents, files, and databases. Loss of critical communications can be construed as a violation of these regulations and may subject the corporation to fines and the managers to legal action.
A third driver, which does not get the attention of the press but is important to organizations nonetheless, is productivity. Loss of important data lowers overall productivity, as employees have to deal with time-consuming customer issues without the aid of computer databases. Data loss also results in application failures and similar system problems, making it difficult for people to do their jobs. A poor data protection strategy may leave people waiting for long periods of time for systems to be restored after a failure. During that time, employees may be idle or able to work only in a reduced capacity, further diminishing productivity.
The demands of a 21st-century business are such that customers expect the business to operate at all times. In an increasingly global economy, downtime is not tolerated by customers, who can readily take their business elsewhere. The inability of a business to operate because of a data loss, even a temporary one, is driving many businesses to deploy extensive data protection schemes. It is not only the e-commerce world that experiences this situation. All types of businesses—including health care, financial, manufacturing, and service—operate around the clock, or at least their computer systems do. Even when no humans are around, computers are available to take and place orders, send orders to the warehouse, and manage financial transactions. Data protection strategies need to take into account these 24/7 expectations.

4. Data Loss and Business Risk

Risk is a measure of potential economic loss, lack of return on an investment or asset, or material injury. Another way to state this is that risk is a measure of exposure to harm. Some common risks are material loss (for example, damaged equipment, facilities, or products), risk to sales and revenue, lawsuits, project failure, and market risk. Risk is associated not only with hard assets, such as building or machinery, but also with revenue, customer loyalty, and investments in projects.
How risk is measured depends on the assets deemed to be at risk. In computer security circles, risk is usually a measure of threats (the capability and willingness for malicious behavior), vulnerability (the holes in the system that can be exploited), and harm (the damage that could be done by a threat exploiting a vulnerability). No matter how you measure risk, the most important component is harm. Without harm, there is no risk.
Insurance, locked cabinets, background checks, and currency hedges are ways that companies seek to minimize harm to their assets and the profitability of the business. If one thinks of information as being a corporate asset, protecting the underlying data is necessary to ensure the value of the asset and prevent its loss. Ultimately, data protection is about mitigating business risk by reducing the ability of some threat to do harm to mission-critical data.

The Effect of Lost Data on Business Operations
Companies recognize that data loss represents a business risk. Even if a monetary value is not assigned to the data, the negative effects on operations can be significant. In many cases, corporate operations can be so adversely affected that companies feel the need to mention the risk in regulatory filings and shareholder reports.
Three types of damage may occur because of data loss. First, data may be unrecoverable. In this case, important business records may be lost forever or available only in hard-copy form. Any business process that is dependent on that data will now be considerably hindered. This is the worst form of damage that can occur.
Next, data may be recoverable but may require considerable time to restore. This scenario—the most likely—assumes that data is backed up in some other place, separate from the primary source. This is a better situation than irrecoverable loss, but the data will be unavailable while recovery operations take place. In some cases, not all the data may be recovered. This is a common problem with data restored from nightly backups. Any data created during the day when the primary data was lost is not on the backup tapes and is lost forever.
Finally, while data is unavailable, either permanently or temporarily, applications not directly related to lost data may fail. This is especially true of relational databases that reference other databases. Loss of a central database of customer information, for example, may cause problems with the sales system because it references customer information. A loss of this type can result in cascade failures, in which several applications fail because of their dependence on another application's data.
Risk to Sales
A company may suffer measurable harm when data loss makes it impossible for it to interact with customers. The result is that the company will not realize sales and revenue.
E-mail has become a primary form of corporate communication. Losing an important e-mail or attachment may mean that a customer may not be serviced correctly; thus, sales are lost. This is especially true of companies that sell capital equipment to other companies. A hard drive crash on the e-mail server may cause an important bid to go undelivered. The salesperson may not even know that the bid was not received by the customers (because it is sitting in the Sent folder stored on a local hard drive) until the sale is lost.
As large companies have become more dependent on call centers, they have become equally dependent on the customer relationship management (CRM) systems that help them track customer issues and orders. This represents a risk to sales, revenue, and profitability. If this risk is realized—if the worst-case scenario comes true—the harm done to the business may be severe enough to propel it into bankruptcy.


Inability to Operate

Extreme data loss such as loss of an entire database, even temporarily, has been known to cause organizations to fail. A company may not be able to fulfill orders, update employee records, produce financial reports, manufacture goods, or provide services. It may not even have an operating phone system. Computer technology and the data associated with it are integrated into all aspects of an organization's operations. Because of this dependence on information technology, there is a clear risk that data loss can make it impossible for an organization to perform properly.

Even partial data loss can disrupt business operations and produce negative effects. Employees may be idled for long periods of time while data is re-created or recovered, reducing productivity. Applications may fail unexpectedly when referencing data that is no longer available. Essential reporting may be incomplete because component data is not available.
Loss of data also makes it difficult for managers to measure company operations. Most modern businesses rely on financial, market, and manufacturing metrics. Without the ability to gather and report on key business indicators, managers are running blind as to the health of the business. Destroyed, damaged, or altered data skews metrics and disrupts decision-making. The overall effect of this type of disruption is reduced revenue and higher expenses, leading to loss of profitability.

Lawsuits and Fines

There is potential for lawsuits and fines when a company experiences data loss. With shareholder lawsuits fairly common, failure to protect data could easily lead to litigation, especially if data loss can be tied to a negative change in the share price of the company's stock. A more likely scenario is that data loss will affect operations and sales, causing the business to underperform. This can then trigger shareholder suits.
Other types of legal action can result in adverse judgments for companies. Companies may be sued for failure to perform duties outlined in contracts or the inability to produce goods and services that have been paid for. A lost order record may result in a customer's suing for direct and collateral damages.
Regulators now have the power to impose data retention requirements on companies. Data retention requirements tell a company what data must be kept and for how long. Fines can be levied when these requirements are not met.
It is not enough simply to have good policies; the policies have to be followed up with good practices. In 1997, Prudential Insurance was fined heavily because it did not properly implement existing electronic document retention policies. This led to the destruction of electronic documents needed as evidence. There was no indication that employees willfully destroyed evidence—only that the company did not take sufficient action to ensure that it was preserved. Though Prudential had a good electronic document retention policy in place, its inability to implement it properly cost the company $1 million in fines.[2]
[2] The National Law Journal, November 3, 2003.

Damaging legal situations can occur when data loss causes financial information to be released late. Regulators, markets, and shareholders expect certain reporting to occur at previously announced intervals. When a company fails to meet these expectations, that failure often leads to fines, lawsuits, drops in price of the company's stock, or even delisting from financial markets.
All these situations represent financial harm to the business. As such, steps need to be taken to protect the business against the risk of lawsuits and fines.
Theft of Information

Another type of harm that requires data protection is theft of corporate information. This may take the form of theft of secrets or a violation of private data. Theft of secrets happens when a thief is able to access internal company information vital to current and future operations. Some examples of these secrets are product plans, product designs, and computer source code. The economic impact of theft of secrets is difficult to ascertain, because the harm is indirect and manifests itself over long periods of time.
Theft of private information, such as customer information, may have three effects:
  • Lawsuits may arise when it is known that this information has been stolen. Customers may sue for damages that result from the use of this confidential information. 
  • Regulators in some countries may be empowered to take criminal and civil action against a company that suffers such a breach. The European Union, for example, requires that "Member States shall provide that the controller must implement appropriate technical and organizational measures to protect personal data against accidental or unlawful destruction or accidental loss, alteration, unauthorized disclosure or access, in particular where the processing involves the transmission of data over a network, and against all other unlawful forms of processing."[3]
 [3] Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995, Section VIII, Article 17.
Other political entities have similar laws that require the safeguarding of information from destruction or breach.
  • Customers may refuse to do business with a company that allows such a theft of private information. It is reasonable to assume that a customer would not want to continue to do business with a company that has not taken adequate care to safeguard private information. 
Reasons for Data Loss

As one might expect, there are many reasons why a corporation might lose important data. Broadly, they can be broken into the following categories:
  • Disasters 
  • Security breaches 
  • Accidents or unintended user action 
  • System failure
Some data protection techniques can be applied to all these causes of data loss; others are better used for specific categories.

Disasters

Disasters are the classic data-loss scenario. Floods, earthquakes, hurricanes, and terrorists can destroy computer systems (and the data housed on them) while destroying the facilities they are kept in. All disasters are unpredictable and may not behave as forecast. The goal of data protection is to create an environment that shields against all types of disasters. What makes this difficult is that it is hard to predict what type of disaster to guard against, and it is too costly to guard against all of them. Companies guard against the disasters most likely to occur, though that is not always good enough. Until just a few years ago, most U.S. companies did not take into account terrorism when planning for disasters.
There are two classes of disasters: natural and manmade. Natural disasters are often large in scope, affecting entire regions. Earthquakes and hurricanes, with their ability to do widespread damage to infrastructure, are especially worrisome; they rarely provide enough time to develop a plan for data protection if one is not already in place. After the disaster begins, it is too late to try to save data.
Manmade disasters are often more localized and generally create much less damage. Fires are the most common manmade disaster, although many other manmade incidents can cause data loss, too. The worst manmade disaster resulting in widespread loss of data (and life) was the September 11, 2001, terrorist attack on the World Trade Center in New York City. The destruction of key computer systems and the harm it wrought on the economy of the United States led the U.S. Securities and Exchange Commission and the Comptroller of the Currency to jointly issue policies[4] requiring that data be adequately protected against regional disasters.
[4] SEC Policy Statement [Release No. 34-48545; File No. S7-17-03].

Security Breaches


Accidental Data Loss
Accidental loss represents one of the most common data loss scenarios. End-users are often the culprits; they delete, overwrite, and misplace critical files or e-mails, often without knowing they've done so.

In the 1980s and early 1990s, it was not at all unusual for the help desk to get frantic calls from end-users who had reformatted their hard drives. Fortunately, changes in desktop operating systems have made accidental reformatting of a hard drive much more difficult, and it is now a rare event. Damaged or reformatted floppy or Zip drives are still a common problem, though this usually destroys only archive data. As other forms of mobile media, such as solid state memory devices, are used by more people, the likelihood of loss of data on these devices grows. And yes, people sometimes drop their smart media cards in their coffee.

Though IT personnel may feel frustrated by the silly errors end-users make that result in data loss, they are responsible for quite a few errors themselves. Botched data migrations, hastily performed database reconfigurations, and accidentally deleted system files are everyday occurrences in the IT world. One of the most common and most damaging IT errors occurs when a backup tape is overwritten. Not only is the previous data destroyed, but there is no good way to recover much of it. Also, quite a few backups are damaged due to sloppy storage practices.

The risk that the end-user represents is usually a recoverable one. Although it's a hassle to dig out backups and pull off individual files, it is still something that can be done if the data in question is important enough. Good habits, such as backing up files to file servers or automated backups and volume shadow copying (now part of the Windows operating system), can alleviate many of the effects of end-user data loss.
IT mistakes represent much greater risk. The effects of an IT accident are not limited to individuals; instead, they affect entire applications and systems, many of which are mission critical. Strict policies and controls are necessary to prevent these types of errors.


System Failure


System failures often cause data loss. The most famous type of failure is a hard drive crash. Although hard drives don't fail with the frequency that they used to, failures are still a major problem for many system administrators. This is especially true of drives in high-use servers, in which drive failure is inevitable. Data can also be corrupted or destroyed because of spurious errors with disk array hardware, Fibre Channel and SCSI host bus adapters (HBAs), and network interface cards (NICs). Fluctuations in electricity, sudden power outages, and vibration and shock can damage disks and the data stored on them.
Failures in software are also a source of data loss. Updated drivers and firmware are notorious for having bugs that cause data to be erased or corrupted. The same can happen with new versions of application or database software. The failure of IT to properly back up and verify the integrity of a backup before installing new software is an age-old problem leading to irrecoverable data loss.
System failures cannot be completely prevented, but steps can be taken to reduce the likelihood of losing data when they occur. One of the most common steps is to buy high-availability (HA) devices for mission-critical applications. HA units offer better protection against shock, power fluctuations, and link failures that can corrupt data. They also have software protection that ensures that I/O is complete and that bad blocks do not get written to disks. Good backup and archive procedures are also important parts of a plan to protect against system failure.
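
One concrete form of that verification is comparing a checksum of the source data with a checksum of the backup copy before relying on it. The TypeScript (Node.js) sketch below does this for a single file; the paths are placeholders, and real backup software verifies whole backup sets and catalogs rather than individual files.

// Node.js sketch: verify a backup copy by comparing SHA-256 checksums.
import { createHash } from "crypto";
import { createReadStream } from "fs";

function checksum(filePath: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const hash = createHash("sha256");
    createReadStream(filePath)
      .on("data", (chunk) => hash.update(chunk))
      .on("end", () => resolve(hash.digest("hex")))
      .on("error", reject);
  });
}

async function verifyBackup(original: string, backup: string): Promise<boolean> {
  const [a, b] = await Promise.all([checksum(original), checksum(backup)]);
  return a === b;   // identical digests mean the copy is intact
}

verifyBackup("/data/orders.db", "/backup/orders.db")
  .then(ok => console.log(ok ? "backup verified" : "backup does NOT match source"))
  .catch(console.error);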

5. Connectivity: The Risk Multiplier

When networking was introduced, the risks associated with it were relatively low. Most networks were small, with only a handful of computers linked. The Internet started as a network of only four mainframes. Local-area networks (LANs) did not become widely deployed until the late 1980s. Access to these networks was very limited, and the number of assets involved was low.
As the networks grew, both in size and complexity, security problems became more prevalent, and the risk involved in using a network became higher. There were more devices of different types, with many more access points. Whereas in the past, disasters or hackers could be contained to one computer, networking allowed problems to spread throughout a large number of machines. There is now network access to more computers than at any time before. Many homes now have several linked computers and network devices, and have become susceptible to the same security and network problems that have plagued the corporate world for years.
Network Attached Storage and Storage Area Network technology have had a similar effect on storage. Data storage devices have traditionally been isolated behind a server. Secure the server, and you secure the storage as well. That is no longer the case, and storage devices are experiencing many of the same problems that other network devices do. Some people would argue that the ability to get unauthorized access to a Fibre Channel SAN is low. However, if a malicious hacker does get through system defenses, he or she now has a greater number of devices to wreak havoc on. Connectivity increases risk because it gives more access to more resources that can be damaged.
Because risk is outcome based, the outcome of a successful intrusion or data corruption in a networked storage environment can be much more devastating than with an equal number of isolated, directly connected storage devices.
Even when system security is not the issue, connectivity can magnify other problems. Previously, one server could access only a small number of storage devices. If something went wrong, and the server caused data to become corrupted, it could do so to only a small amount of data held on its local resources. Servers can now potentially access hundreds or even thousands of storage devices and can corrupt data on a scale that was not possible before.
Networked storage also has increased the complexity of the storage system, which can introduce more problems. The complexity of the storage infrastructure has increased dramatically, with switches, hubs, cables, appliances, management software, and very complicated switch-based disk array controllers. The opportunity to introduce errors into the data stream and corrupt or destroy it is much higher with so many devices in the mix.
In the networked storage environment, there are many servers and many storage devices. More servers can damage or provide unauthorized access to data. Even a single server can affect many data storage devices. The potential harm is multiplied by the high degree of connectivity that a modern storage infrastructure allows for.

6. Business Continuity: The Importance of Data Availability to Business Operations

Business continuity is the ability of a business to continue to operate in the face of disaster. All functional departments within a company are involved in business continuity. Facilities management needs to be able to provide alternative buildings for workers. Manufacturing needs to develop ways of shifting work to outsourcers, partners, or other factories to make up for lost capacity. Planning and execution of a business continuity plan is an executive-level function that takes into account all aspects of business operations.
Information technology plays a key role in maintaining operations when disaster strikes. For most modern companies to function properly, communications must be restored quickly. Phone systems and e-mail are especially important, because they are primary communications media and usually are brought online first. After that, different systems are restored, depending on the needs of the business.
Protecting data and the access to it is a primary component of business continuity strategies. Restoring systems whose data has been destroyed is useless. What is the point of restoring the financial system if all the financial data has disappeared? IT, like other departments, needs to ensure that the data entrusted to it survives. In many cases, it is less important that the hardware systems themselves survive, so long as critical data does. If the data is still intact, new hardware can be purchased, applications reloaded, and operations restored. It might be a slow process, and there will be financial ramifications, but at least the business will eventually return to normalcy. Without the data, that will never happen.

The Changing Face of Data Protection

In the past, data protection meant tape backups. Some online protection could be obtained by using RAID (which is explained in Chapter 2) to keep data intact and available in the event of a hard drive failure. Most system administrators relied on copying data to tape and then moving some of those tapes offsite. This is still the most common form of data protection, but only part of a whole suite of techniques available for safeguarding data.


Remote Data Movement and Copy


It was natural to extend the paradigm of duplicating important data on another disk (RAID) to duplicating it to another storage system, perhaps located in a different place. In this process, called remote copy, exact copies of individual blocks of data are made to a remote system. This system might be right next door or hundreds of miles away. Remote copy allows a second storage system to act as a hot backup or to be placed out of harm's way and available for the disaster-recovery site to use. At present, remote-copy systems tend to be expensive. The telecommunications needed to support them present the IT manager with a high recurring expense. The costs involved with remote copy have tended to relegate its use to high-end applications and very large companies.

Disk-Based Backup

Typically, backups consist of copying data from a disk system to a magnetic tape. Tape is, unfortunately, slow to write to, lacks the capacity that modern disks have, can be difficult to manage, and is very slow to recover data from. Because the purpose of a backup, as opposed to an archive, is to produce a copy of the data that can be restored if the primary data source is lost, slow recovery is a problem.
Because of these limitations, disk-based backups are gaining in popularity. Originally positioned as a replacement to tape, this method is seen as being part of a more sophisticated backup strategy. With disk-based backups, similar software and techniques are used as with tape, except that the target is a disk system. This technique has the advantage of being very fast relative to tape, especially for recovery. The disadvantages are that disk drives generally are not removable, and the data cannot be sent off-site the way a tape can.

Networked Storage

The biggest changes in data protection come courtesy of networked storage. In the past, storage was closely tied to individual servers. Now storage is more distributed, with many clients or servers able to access many storage units. This has been both positive and negative for data protection. On the one hand, networked storage makes certain techniques—such as remote copy, disk-based backup, and distributed data stores—much easier to implement and manage. The ability to share certain resources, such as tape libraries, allows for data protection schemes that do not disrupt operations.
However, the networked storage environment is much more complex to manage. There tend to be many more devices and paths to the data. Because one of the key advantages of networked storage is scalability, these systems tend to grow quickly. This growth can be difficult to manage, and the sheer number of devices in the storage system can be as daunting as other types of corporate networks.
Networked storage allows for multiple paths between the server or client and the data storage devices. Multiple paths work to enhance business continuity by making link failures less of a problem. There is less chance that a broken cable will cause applications and backups to fail. Overall, networked storage is more resilient. It produces an environment in which safeguarding data and recovering from failure are performed more quickly and efficiently.

Information Lifecycle Management

The future direction of data protection is in a recent concept called Information Lifecycle Management (ILM). ILM is less concerned about the underlying data than about the upper-level information. Information is data with context; that context is provided by metadata, or data about the data. ILM guides data protection by determining what type of protection should be applied to data, based on the value of the information it supports. It makes sense to spend a lot of money on remote copy for very valuable information. Other information may not be worth protecting at all. ILM helps determine which path to take in making those decisions.
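
A toy TypeScript sketch of that idea: the declared value of the information, not any property of the raw data, selects the protection technique. The value categories and the mapping below are invented purely for illustration.

// Toy ILM-style policy: information value drives the choice of protection.
type InformationValue = "critical" | "important" | "low";
type Protection = "remote copy" | "disk-based backup" | "no extra protection";

const policy: Record<InformationValue, Protection> = {
  critical: "remote copy",          // worth the recurring telecom expense
  important: "disk-based backup",   // fast recovery at moderate cost
  low: "no extra protection",       // not worth protecting at all
};

function protectionFor(value: InformationValue): Protection {
  return policy[value];
}

console.log(protectionFor("critical")); // "remote copy"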