How to block unwanted websites
You can try parental control software such as CyberPatrol Parental Control, Parental Control Bar, or Safe Families We-Blocker Parental Control.
If you are browser-specific and use Internet Explorer, you can use Content Advisor, which appears under Internet Options. If you want software that blocks sites by category, the best option is K9 Web Protection.
You can also try OpenDNS; besides filtering sites based on your preferences, it also blocks malware sites.
Windows Firewall can block applications for the users of a particular PC, but it cannot block individual websites. You can check this yourself: go to Control Panel, then Windows Firewall, then Exceptions, and you will see only services and programs that can be blocked. If you add new entries with Add Program, the applications you select can be blocked for the users of that PC, but no Internet website can be.
The easiest and most reliable way to block particular websites on a client operating system is to use the Windows hosts file. The steps are:
Go to C:\WINDOWS\system32\drivers\etc.
Find the file named hosts.
Open it with Notepad, go to the end of the file, and on a new line map the full domain name of the site you want to block to 127.0.0.1 (the local loopback address).
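For example, to block a particular site (example.com is only a placeholder domain here), the lines added at the end of the hosts file would look like the following; each entry points the domain at the local loopback address so the browser never reaches the real site:

    127.0.0.1    www.example.com
    127.0.0.1    example.com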
Introduction
A digital signature serves many purposes. It does not just validate data; it is effectively a fingerprint from the sender. The digital signature depends on the key pair, the data of the message, and the signature itself. All these elements are mapped to each other so that if any item changes, the signature does not pass verification. The originator of the data hashes the data and signs the hash with his or her private key to create a signature. A public key that corresponds to that private key can be used to verify the signature. Verification matches the data, the signature, and the public key against one another.
If the signature passes verification, it is guaranteed that the message has not changed and that the public key used came from the user who generated the signature. The exchange of the private and public key becomes the crucial point in identifying the user. A person might deny that his or her private key encrypted the message and assert that someone else sent you the public key. If the exchange can be guaranteed and the message signature is verified, it is relatively easy to prove that a specific user sent the message and hard for the user to deny sending the message.
Because of this type of verification and the need to ensure authenticity of messages for legal reasons, state legislatures and many legal organizations have been looking at digital signatures to provide a means of identifying a message. Because more and more contract agreements are being handled electronically, many organizations need a way to verify the origin of a message from a person or an organization.
With the threat of hackers and others manipulating data, it becomes difficult to prove that a message originated from a specific user. Digital signatures offer a solution. Many companies are specializing in digital signatures for these reasons. The digital signature can be combined with other protocols such as Public Key Infrastructure (PKI) and the X.509 certificate.
The RSA Security Company, which provides the RSA key exchange algorithm, was one of the first vendors to provide digital signatures. While many vendors were working with RSA on a digital signature, the National Institute of Standards and Technology (NIST) decided to provide a standard for the community. The NIST published a proposed Federal Information Processing Standard (FIPS) for the Digital Signature Standard (DSS) in August 1991.
At the time of its conception, the DSS received a lot of criticism from RSA, simply because RSA was already trying to submit its own proposal for a digital signature standard using the RSA key exchange. The NIST did not adopt the RSA key exchange but instead introduced a new keying algorithm that was part of the Digital Signature Algorithm (DSA). A lot of companies at the time, such as IBM and Sun, invested a lot of time and money into the RSA Digital Signature Algorithm. The NIST received a lot of criticism because during this time, in January 1992, the NIST also invented the SHA-1 message digest to support the DSS. The criticism came from the fact that the SHA-1 was based on the MD4 algorithm, and now it seemed as though the DSS was being based on some of the work from RSA.
More criticism came from the fact that the National Security Agency (NSA) also worked with NIST to develop the algorithm, that it was a suggested public standard, and that the original key length was 512 bits. As with many other collaborations from the NSA, many organizations felt that the NSA had a crack for the algorithm so that it could maintain surveillance of data. There is still a lot of paranoia that the use of a public key and public variables can be used in combination to break the signature. Because some of the key exchanges, such as the RSA and Elliptic Curve key exchange, appear stronger, the NIST has added these algorithms to the DSS.
Even though there is paranoia surrounding some of the history of DSS, the DSS provides a global standard for exchanging signatures and for how the DSA works. Because of the DSS, many protocols such as X.509, Privacy Enhanced Email (PEM), and Pretty Good Privacy (PGP) have evolved. The digital signature, like so many other aspects of security, is not an entity unto itself but is used with several other building blocks of security.
The digital signature provides an organization the capability to protect itself with the combination of all the other building blocks.
Understanding the Digital Signature Algorithm (DSA)
The Digital Signature Algorithm is the algorithm specified for the DSS. The DSA's objective is to provide a signature and verify it in the form of two variables, r and s. A private key is needed in order to sign the data. In order to verify the signature, the public key is needed. There are generally three major steps that must be accomplished before signing data or verifying the signature:
- The private and public keys must be available.
- The parameters must be initialized (for DSA, they are called DSA parameters).
- Before signing and verifying, the data must be passed through the update methods.
The first step in the algorithm is to initialize the DSA parameters p, g, and q along with the private key x and the public key y. The public variables p, q, and g are needed to compute and verify the signature. These variables are used to help compute the hash value with the private key and to verify the hash value with the public key. If the computation for verification does not work with the public variables p, q, and g, the public key y, the signature variables r and s, or the integer value representing the hash, the data is non-trustworthy.
The public variables p, q, and g are considered public because they can be distributed to an entire group without compromising the algorithm. The signature variables r and s need the public variables plus the hash value and a randomly generated k value that is generated specifically for every signature. The private variable k must always be newly generated for each signature, and the hash value is taken from the data that is updated for the SHA-1 digest.
There are some things that must be emphasized. One thing to note is that many papers, algorithms, and specifications describe algorithms for probabilistic primality testing, which is used to generate and test for a prime number. The java.math.BigInteger class provides the isProbablePrime method to test a number's primality to a specified degree of certainty, and, seeded with a java.security.SecureRandom instance, it can also generate a number that is prime to a specified certainty. The higher the degree of certainty, say 90 percent, the longer the algorithm takes to ensure the number is prime. Many encryption algorithms require prime numbers or the algorithms will not work.
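As a small illustration of these classes (a sketch only; the 160-bit size matches the q value discussed below, and the certainty value 100 is an arbitrary choice):

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class PrimeCheck {
        public static void main(String[] args) {
            SecureRandom random = new SecureRandom();

            // Generate a 160-bit number that is prime with very high probability
            // (certainty 100 means the error probability is at most 2^-100).
            BigInteger q = new BigInteger(160, 100, random);

            // isProbablePrime re-tests the number to the given certainty.
            System.out.println("q = " + q);
            System.out.println("q probably prime? " + q.isProbablePrime(100));
        }
    }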
The p, g and q must satisfy the following requirements:
- The public variables p and q must be prime numbers.
- The q number is a randomly generated prime divisor, and p is its associated prime modulus.
- The p must be a number with the length between 512 and 1024 bits. Most algorithms will denote the length by the letter l.
- The q must fall between 2^159 and 2^160.
The p and q numbers must have an association in which they can generate an integer g in the form g = h^((p-1)/q) mod p. The variable h is normally a randomly generated integer between 0 and p - 1.
The sample program tries many combinations until the numbers can fit into the preceding restrictions. Many algorithms generate the public variables when the keys are generated and pass them through the public and private key classes. After the public variables are generated, the public and private keys can be generated. The private key, represented by x, is a randomly generated integer greater than 0 and less than q. The public key y is generated by the equation
y = g^x mod p
The public key is a product of the private key and forms a relationship where only x can be associated with y and vice versa. After the keys and public variables are formed, a message can be signed. The verification and signing of the message cannot be accomplished without most of these variables. The verification requires the public variables and the public key y.
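The following is a minimal sketch of this key-generation step using java.math.BigInteger, assuming the parameters p, q, and g have already been generated:

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class DsaKeySketch {
        // Picks a private key x in (0, q) and derives the public key y = g^x mod p.
        static BigInteger[] generate(BigInteger p, BigInteger q, BigInteger g) {
            SecureRandom random = new SecureRandom();
            BigInteger x;
            do {
                x = new BigInteger(q.bitLength(), random);
            } while (x.signum() == 0 || x.compareTo(q) >= 0);   // ensure 0 < x < q
            BigInteger y = g.modPow(x, p);                        // y = g^x mod p
            return new BigInteger[] { x, y };
        }
    }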
The signing of the message requires the public variables and the private key x. Besides the public variables and keys, the message signature and verification depend on one other variable: the message itself. The signature consists of the two variables r and s. The r variable is calculated from the equation
r = (g^k mod p) mod q
This calculation uses the public variables and a randomly generated integer k. The s variable is calculated using the message hash from the SHA-1 message digest, SHA(M), and the private key x, as in the following equation:
s = (k^-1 (SHA(M) + xr)) mod q
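The signing arithmetic can be sketched the same way; this assumes BigInteger values for p, q, g, the private key x, a fresh per-signature k, and the SHA-1 hash of the message encoded as an integer (this is the raw mathematics, not the JCA API covered later):

    import java.math.BigInteger;

    public class DsaSignSketch {
        // Computes the DSA signature pair (r, s) from the public parameters,
        // the private key x, the per-signature random k, and the message hash.
        static BigInteger[] sign(BigInteger p, BigInteger q, BigInteger g,
                                 BigInteger x, BigInteger k, BigInteger hash) {
            BigInteger r = g.modPow(k, p).mod(q);            // r = (g^k mod p) mod q
            BigInteger s = k.modInverse(q)
                           .multiply(hash.add(x.multiply(r)))
                           .mod(q);                          // s = (k^-1 (SHA(M) + x*r)) mod q
            return new BigInteger[] { r, s };
        }
    }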
The verification of the message fails without the correct private key and the correct message digest. The goal of the signature verification is to generate a verification value represented by v and to compare it to the variable r of the signature. If they are not equal, the verification fails.
The variable v does not need the private variable k and the private key that were used to generate the digital signature. The associated public key and the message digest, generated again by the SHA-1 algorithm with the message, are the only new variables needed. The calculation of v is quite lengthy and is broken down into multiple steps.
The first step is to compute the variable w from the signature's s variable with the equation w = s^-1 mod q. The next step is to use the message digest to calculate the u1 variable with the equation u1 = (SHA(M) w) mod q, which is used to verify the message digest.
The next equation adds the signature r to the calculation to produce the u2 variable: u2 = (rw) mod q. Finally, all these calculations, including the public variables and the public key, are used to calculate the v variable with the equation v = ((g^u1 y^u2) mod p) mod q.
The variable v is checked against the signature variable r, and they should be equal unless something has been altered or used incorrectly, such as the associated public key. If the variable v matches the variable r, the correct digest and public key were used in the calculation. If the correct public key was used in the calculation, the matching private key was used to generate the signature. By knowing that the public key matches a specific user's private key, there is a guarantee that the message came from that user.
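Continuing the raw-arithmetic sketch, the verification side assumes BigInteger values for the public parameters, the public key y, the received r and s, and a freshly computed SHA-1 hash of the message:

    import java.math.BigInteger;

    public class DsaVerifySketch {
        // Returns true when the recomputed v matches the signature's r value.
        static boolean verify(BigInteger p, BigInteger q, BigInteger g, BigInteger y,
                              BigInteger r, BigInteger s, BigInteger hash) {
            BigInteger w  = s.modInverse(q);                 // w  = s^-1 mod q
            BigInteger u1 = hash.multiply(w).mod(q);         // u1 = (SHA(M) * w) mod q
            BigInteger u2 = r.multiply(w).mod(q);            // u2 = (r * w) mod q
            BigInteger v  = g.modPow(u1, p)
                             .multiply(y.modPow(u2, p))
                             .mod(p)
                             .mod(q);                        // v  = ((g^u1 * y^u2) mod p) mod q
            return v.equals(r);
        }
    }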
If the message digest computed from the data checks out, that is, it validates, there is a guarantee that the data has not changed. The only piece missing from DSA is the guarantee that the key came from a specific user, but that is the purpose of the key exchange. The public key may be embedded in a message, as in Pretty Good Privacy (PGP), or distributed by many other means.
Getting the RSA Digital Signature Algorithm
The latest FIPS 186-2 now lists the RSA digital signature (RSA ds) as one of the three recommended algorithms for digital signatures. FIPS 186-2 simply refers to ANSI X9.31 for the documentation.
Recall from the key agreement algorithm that RSA has the two primes p and q and the modulus n = pq. The public key is represented by {n,e} and the private key by {n,d}. If the private key {n,d} is not available, it can be computed from the p, q, dP, dQ, and qInv variables of the Chinese Remainder Theorem (CRT) key.
If the CRT key is used, the variable s can be generated from the following equations:
• s1 = m^dP mod p.
• s2 = m^dQ mod q.
• h = qInv(s1 - s2) mod p.
• s = s2 + hq.
When the signature is generated, a digest is computed for the data and returned as the variable m. The signature s is computed from the following equation:
s = m^d mod n
To verify the message, the algorithm will need the public key {n,e}, the capability to recompute the same digest from the data as m, and the signature s. The message digest is recomputed as the test variable a. The b variable will be generated using the following equation from the signature and the public key:
b = s^e mod n
The variables a and b match if the signature, keys, and data are valid. The a value is computed as the integer returned as the message digest. Unlike the DSS algorithm, the RSA algorithm may use the MD2, MD4, MD5, or SHA-1 digest. In order to account for the possibility of different message digests, the message digest algorithm identifier is returned as part of the signature information block.
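The core of this check can be sketched with the same BigInteger arithmetic (assuming the integer-encoded digest and the RSA key components are already available; real implementations add the PKCS#1 formatting described next):

    import java.math.BigInteger;

    public class RsaSignSketch {
        // Sign: s = m^d mod n, where m is the integer-encoded message digest.
        static BigInteger sign(BigInteger m, BigInteger d, BigInteger n) {
            return m.modPow(d, n);
        }

        // Verify: recompute b = s^e mod n and compare it with the freshly computed digest a.
        static boolean verify(BigInteger s, BigInteger e, BigInteger n, BigInteger a) {
            BigInteger b = s.modPow(e, n);
            return a.equals(b);
        }
    }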
Other variables that are included in the format of the signature are block type, encryption-block formatting, and a padding block. RSA not only has a key algorithm and signature algorithm, but also an encryption algorithm. Since RSA includes an encryption algorithm, the signature block may also be encrypted with the RSA cipher.
In order to format the signature block, the PKCS#1 includes a padding string and algorithm to ensure the correct format size when hashing and using the RSA encryption.
Understanding the Elliptic Curve Digital Signature Algorithm
The ECDSA generates a signature with a private key and verifies the signature with a public key. The ECDSA starts by selecting an integer value k and multiplying it by a point P along the elliptic curve; the result is the point (x1, y1). Just like the DSS algorithm, an r and s variable is calculated and saved for the signature. Since x1 is an integer value on the x coordinate axis, r can be computed by the following equation:
r = x1 mod n
The variable s is calculated by the SHA-1 hash on the message represented by h(m) and the private key d. The s equation becomes
s = k^-1 (h(m) + dr) mod n
The signature is the r and s variable pair. To verify the signature, the ECDSA needs r, s, the message for the digest, and the public key (E, P, n, Q). The verify method is initialized with the public key. Then the message needs to be passed through the algorithm's update method to store the hashed message. The verify method is then called, passing in the signature containing the r and s. The verify method hashes the message and converts the digest into an integer m. Just like the DSA algorithm, the ECDSA contains multiple calculations. The next variable after m to be calculated is the variable w, which is calculated using the equation
w = s^-1 mod n
The next two variables are u1 and u2 calculated by
u1 = mw mod n and u2 = rw mod n
The point along the elliptic curve is computed from these calculations. Taking the public key variables P and Q, the additive property is used from u1 and u2 to form u1P + u2Q. The point from u1P + u2Q is computed in the x-y coordinate system as the point (x0, y0). The x0 coordinate along the x-axis is used to find the result in v = x0 mod n. If the variable v in the verify method equals the variable r in the signature generator, the message and keys used are valid.
Some of the equations between the ECDSA and DSA appear similar because both algorithms are based on the ElGamal signature algorithm by signing the equation
s = k^-1 (h(m) + dr) mod n
Both the ECDSA and DSA use the SHA-1 message digest as the defined digest to use. The ECDSA uses public variables from the public key (E, P, n, Q) to compute the intermediate variables w, u1, and u2. Looking at DSA, similar calculations were accomplished using the public variables p, q, and g. These values are needed to perform the computations without the private key. Both of these algorithms share the complexity of trying to generate the checks with just the public variables. The biggest difference between these two algorithms is the type of equations. The ECDSA is elliptical and uses geometric properties. In other words, checks consist of checking whether points fall on a curve, as opposed to checking whether the values are greater than 0 and less than q, as in the DSA.
The DSA is easier from a computational standpoint in that numbers can be checked to be less than, greater than, or equal to. The computational complexity of ECDSA makes the algorithms more secure. Also, because the numbers that can be used in ECDSA are limited to the points along a curve (versus the DSA using prime numbers that must fit together), the ECDSA is computationally faster.
Implementing the Digital Signature Algorithm (DSA)
The JDK 1.4 supports only the DSA and RSA algorithms out of the box through its bundled service providers. The JDK 1.4 is easily extensible to add service providers from other vendors or to build a custom one for algorithms such as the ECDSA. The ECDSA is a very popular algorithm, and service providers for it are available from companies such as Certicom and Cryptix. For many organizations, using the DSA and RSA signatures is sufficient.
The first step in using the JDK 1.4 framework is to generate a key pair using the java.security.KeyPairGenerator class's getInstance method. By passing in the variable DSA, a key pair for the DSA algorithm is created. If RSA is passed in as the parameter, a pair of RSA keys is generated. Likewise, the java.security.Signature class's getInstance method initializes the signature class by passing in a SHA1withDSA parameter. To generate RSA signatures, the MD5withRSA, MD2withRSA, and SHA1withRSA parameters can be used depending on which message digest needs to be implemented. Whether the signature is being used for generating the signature or verifying the signature will determine which key, public or private, will be initialized into the algorithm.
All signature algorithms require that the data be passed through the update method for both verifying and generating the signature. After the message data has been updated, a signature can be returned from the sign method, or a signature can be verified by passing it in the verify method.
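A minimal sketch of that flow with the JDK 1.4 classes, signing a short byte array with DSA and verifying it with the matching public key:

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    public class DsaSignatureDemo {
        public static void main(String[] args) throws Exception {
            byte[] data = "message to protect".getBytes("UTF-8");

            // Generate a DSA key pair (pass "RSA" instead for RSA keys).
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
            kpg.initialize(1024);
            KeyPair pair = kpg.generateKeyPair();

            // Sign: initialize with the private key, update with the data, then sign.
            Signature signer = Signature.getInstance("SHA1withDSA");
            signer.initSign(pair.getPrivate());
            signer.update(data);
            byte[] sigBytes = signer.sign();

            // Verify: initialize with the public key, update with the same data, then verify.
            Signature verifier = Signature.getInstance("SHA1withDSA");
            verifier.initVerify(pair.getPublic());
            verifier.update(data);
            System.out.println("signature verified: " + verifier.verify(sigBytes));
        }
    }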
One of the features found in the JDK 1.4 is the java.security.SignedObject class. The SignedObject class is created with a digital signature passed into it and a serialized object. The purpose of the SignedObject class is to protect the runtime object with an associated signature. If the integrity of the object is compromised, then the signature detects it and an exception is thrown. The SignedObject provides a deep copy of the serialized object so a digest can be created and the integrity checked. Features like this one and the capability to use a DSA and RSA signature out of the box make Java a powerful programming language.
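A short sketch of SignedObject in use, reusing a freshly generated DSA key pair (the serialized payload here is just a String standing in for a real object):

    import java.io.Serializable;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;
    import java.security.SignedObject;

    public class SignedObjectDemo {
        public static void main(String[] args) throws Exception {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
            kpg.initialize(1024);
            KeyPair pair = kpg.generateKeyPair();

            Serializable payload = "order #42: 100 units";   // hypothetical business data

            // Wrap a deep copy of the serialized object together with its signature.
            SignedObject signed = new SignedObject(payload, pair.getPrivate(),
                    Signature.getInstance("SHA1withDSA"));

            // Later, verify the integrity and recover the object.
            boolean ok = signed.verify(pair.getPublic(), Signature.getInstance("SHA1withDSA"));
            System.out.println("integrity intact: " + ok + ", payload: " + signed.getObject());
        }
    }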
The essence of the digital signature is to provide a key pair that can verify a digest and generate a signature for verification. If the data, signature or key doesn't match, then the message is corrupted. The public key that can validate the message can only come from the specific person who signed the message using his or her private key. If a different public key is used, then the signature will not verify. The signature on the message, in most cases, will be unique and can easily verify whether changes have been made to the message in transit. Because the validation on the sender's private key, data, and signature can be accomplished, it is normally assumed that it can be guaranteed that the message came from a specific user.
Ensuring data integrity is a very important part of computer science, from the Web page to software being ordered off the Web. If files are not periodically checked, then viruses and Trojans can be inserted into the data without the organization being aware of it. This can be very serious: if a customer buys infected software from the organization, he or she may not be a repeat customer, and media reports stating that the software is infected could shut down an organization.
Another scenario where corrupt data could affect a company is in its communication with third-party vendors. For example, if an online mortgage company that looks for the best rates from banks receives a rate from one bank that is more than 200% higher than the others, a hacker might have altered the message. It is obvious that that bank will not be getting the business.
Protocols and software can be used to prevent these situations. The basic strategy to assist an organization is to establish security requirements, a policy for software, and steps to accommodate the plan. An organization should dedicate resources specifically for handling security issues and employ software architects and developers to specifically ensure the security of the organization. Many organizations are focused on getting the product finished, but if the product is deployed on an insecure framework, the entire product is compromised. Hackers spend their time knowing the market, and there have been many cases where a product was damaged before it reached the market.
Understanding the Hash Function
The secure hash is an algorithm that takes a stream of data and creates a fixed-length digest from it. The digest is a fingerprint of the data. No message digest is perfect, but theoretically it should have a low collision rate, if any, and be a quick, secure algorithm that provides a unique fingerprint for every message. If even one single bit of data is changed in the message, the digest should change as well.
Notice, however, that there is a very remote probability that two different arbitrary messages can have the same fingerprint. When two or more messages can have the same fingerprint, it is known as a collision. When the same exact message is hashed twice, it should generate the same digest. These are just some of the requirements that the hash function is based on and they should be the criteria for which hash algorithm to choose.
The hash functions will generally fall into three types of algorithms based on their uses. There are hashes that don't require a key, those that require a secret key, and those that require a key pair. The algorithms that don't require a key are known as message digests. Those algorithms that require a secret key are known as message authentication codes, and those that require a key pair are known as digital signatures.
Understanding the Message Digest
A message digest (MD) is an algorithm that uses a hash function to create a digest. The digest is simply the fingerprint of the original message. The digest is used to validate that the message has not been altered. In order to check the integrity of a digest, it must be compared against the original digest, which must be trusted by the receiver as being untampered with. For instance, if the original message is M1 and the message digest algorithm (MD) is applied, a digest D1 is produced. This is illustrated in the following equation.
MD(M1) = D1
When the message needs to be validated again at a later time, the current message, M2, is hashed to a new digest. If any data in the message has changed, even by one bit, the message digest must produce a different digest, as illustrated in the next equation:
MD(M2) = D2
Now the two digests are compared, and if there is a difference between the digests, D2 is considered invalid or altered.
Note In the D1 and D2 comparison, D1 must be trusted by the receiver as being the original digest and so it is up to the organization to keep it safe. One suggestion is to put D1 in an LDAP server.
Encryption and digests
Another use of the digest is to encrypt it within a message, as in SSL or X.509, so that it can be decrypted with a public key and checked for corruption of the data. Since the private key is needed to encrypt the digest, only the owner of the private key can generate it. The owner of the private key is usually the originator of the message, so this scenario works well. Any user who has a copy of the public key can decrypt the digest but cannot encrypt it without the private key. This private-public key scenario is an example of a key pair.
If you are familiar with Serial Communications and TCP/IP, this type of message integrity check may look familiar. In TCP/IP, there is a Cyclic Redundancy Code (CRC) to ensure that the receiver received the message in its entirety. If the receiver calculates the CRC and it doesn't match the message, the TCP/IP packet is retransmitted. The CRC code uses a 12-bit, 16-bit, or 32-bit CRC size. First, the CRC uses a polynomial calculation to sum the bits in the message into the desired bit-size CRC digest. Then, the CRC is used to detect errors in a transmission. The idea of using a digest for messages has been around for quite some time in other protocols; the algorithms have evolved over time.
Many algorithms can be used for checking the message digest, such as MD2, MD4, MD5, SHA-0, SHA-1, RIPEMD-160, Tiger, and many more. When testing the message, the tester must be aware of the algorithm that is being used. If the digest was hashed using MD5 and the message to be validated was hashed using SHA-1, then the digests will differ even if the messages are the same. An organization needs to establish standards for which algorithms it uses for the MD.
Differentiating MDs
Many characteristics are used to differentiate MDs. Each MD usually has a set of four or five initialization register values that are the first values used in the hash. The registers were originally optimized for 32-bit processors, and the initialization values are what seed those registers. The initialization values are important so that the input data is not the only input to the first round of the hash, meaning even less can be known about the input data. When the algorithm is initialized, the buffers need to be zeroed out. When the digest is returned, the algorithm needs to be initialized again to start a new digest. Many algorithms use temporary buffers and have the capability to add input data through an update method.
One of the characteristics of the message digest is that it is a one-way hash. A one-way hash means that the input data cannot be recovered by looking at the digest or hash. After the initialization, data can be input for the algorithm to compute. The data must not exceed the message digest's maximum size. The message digest breaks the input data into blocks. Most algorithms use a 512-bit block size, but the block size is algorithm-specific. If the data input is smaller than the block size, the algorithm must pad the data to reach the correct block size. A length field is appended so that the padded blocks record the length of the original message. After the input data is entered and formatted to the correct block size, each block goes through the algorithm's computations.
Breaking down the algorithm
The algorithm is normally broken down into rounds and operations. The rounds are a set of like operations performed on the data block. For example, SHA-1 has four rounds, and each round has 20 steps. The step count is the number of times the data is transformed, and a round is a set of completely different transformations on the data. After the data has been hashed, the result needs to be compressed into a digest. The compression takes the 512-bit block and puts it into a 160-bit digest in SHA-1; other algorithms have different sizes. SHA-1 also illustrates the padding, initialization, and update phases described here. Many of the message digests have different values, different operations in the computation, and several other factors, but the basic flow remains the same.
The initial values in SHA-1's five registers are the variables that initialize the chaining variables. The initial variables are hashed with the first input message block. The result of that hash is used as the chaining variables for the next input message block to be hashed, and this process continues until the final phase is called by the application to convert the hash into the hash digest. The hash in SHA-1 is kept in five integer registers until the final phase, and when entering the final phase, the hash is converted to 20 bytes.
The general steps of a message digest algorithm can be described as:
Step 1: Initialization.
Step 2: Break the data input into the appropriate block size, padding if necessary.
Step 3: Append the length.
Step 4: Pass each block through the algorithm's rounds and operations.
Step 5: Compress to digest the data.
Implementing the Different Message Digest Algorithms in Java
To understand which message digests are supported in Java, you can get a listing of the properties of the service providers.
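A small sketch of that listing; it walks the installed providers and prints every MessageDigest algorithm they register:

    import java.security.Provider;
    import java.security.Security;
    import java.util.Iterator;

    public class ListDigests {
        public static void main(String[] args) {
            // Walk every installed provider and print its MessageDigest entries.
            Provider[] providers = Security.getProviders();
            for (int i = 0; i < providers.length; i++) {
                Provider p = providers[i];
                for (Iterator it = p.keySet().iterator(); it.hasNext();) {
                    String key = (String) it.next();
                    if (key.startsWith("MessageDigest.")) {
                        System.out.println(p.getName() + ": "
                                + key.substring("MessageDigest.".length()));
                    }
                }
            }
        }
    }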
The update method of the MessageDigest class adds input data to the algorithm. Multiple updates can be applied to the message digest to be hashed. The final phase will not complete until the digest method is executed. The variable chain starts at the point of the getInstance or digest method and ends at the next digest method. What this means is that if the program calls getInstance, then performs several updates, and finally a digest call, the digest covers all the data passed through the updates.
When the digest method is called, the variables and buffers are reset to an initial state so that a new digest can start from that point on. Being able to provide multiple updates is one of the features that Java provides in addition to abstracting the algorithms. Another feature worth noting is the java.security.DigestInputStream class, which associates a message digest with an input stream. When data is read from the input stream, it is sent directly to the update of the message digest. Classes such as DigestInputStream can eliminate several method calls that would otherwise be required to read data and call the updates.
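The following sketch shows both approaches producing the same SHA-1 digest, once through explicit update and digest calls and once through a DigestInputStream:

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import java.security.DigestInputStream;
    import java.security.MessageDigest;

    public class DigestStreamDemo {
        public static void main(String[] args) throws Exception {
            byte[] message = "The quick brown fox".getBytes("UTF-8");

            // Hash directly with update()/digest().
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            md.update(message);
            byte[] direct = md.digest();   // digest() also resets the object for reuse

            // Hash transparently while reading from a stream.
            MessageDigest md2 = MessageDigest.getInstance("SHA-1");
            InputStream in = new DigestInputStream(new ByteArrayInputStream(message), md2);
            byte[] buffer = new byte[64];
            while (in.read(buffer) != -1) {
                // every byte read is fed to md2's update() automatically
            }
            byte[] streamed = md2.digest();

            System.out.println("digests match: " + MessageDigest.isEqual(direct, streamed));
        }
    }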
This article discussed the use of the message digest for ensuring message integrity. There are many message digest algorithms that can be used and more are evolving every day. Most of the modern-day algorithms are based on Ron Rivest's MD4. Some of the algorithms such as RIPEMD-160 have become much more complex in the computations of the hash, which means that the execution time of the algorithm is higher.
The algorithm that Ron Rivest designed after MD4 was MD5, the successor to MD4, created because of MD4 collisions. Collisions occur when multiple messages can generate the same digest. MD5 is much faster than RIPEMD-160 but can also generate some collisions because the algorithm is not as complex as RIPEMD-160. In the middle of MD5 and RIPEMD-160 is SHA-1, which is faster than RIPEMD-160 but slower than MD5 because of its computational complexity. So there are several choices for the message digest algorithm. The algorithms supported by the Sun JDK 1.4 are MD5 and SHA-1.
The network services that bind to TCP ports provide direct access to the host system. If the service provides access to the hard drive, then any remote user has the potential to access the hard drive. Whereas network protocols such as IP and IPv6 provide the means to reach a remote host, TCP provides a port into the system. By identifying the type of system and type of service, an attacker can select appropriate attack vectors.
Operating System Profiling -
Most TCP implementations allow parameter customization for optimizing connections. Systems may specify larger window sizes, define more retries, or include specific TCP options such as timestamps [RFC793]. The default selections of these values are operating system specific. Windows does not use the same default settings as Linux or Cisco. Moreover, some settings are very specific; in some cases, these can identify specific operating system versions and patch levels.
Initial Window Size -
Different operating systems use different initial window sizes. Although the initial value can be modified, most systems use the default value. When the server (or client) receives the initial window size, it can use this information to identify the type of operating system that transmitted the data. For example, Windows 2000 uses an initial window size of 16,384 bytes, Windows XP specifies 64,240 bytes, and Debian Linux defaults to 5,840 bytes (1,460 bytes with a scale value of 2^2). If the initial window size from a TCP connection specifies 16,384 bytes, then the sending system is potentially running Windows 2000 and not Debian or Windows XP. As TCP conversations continue, the window size usually increases. This results in improved performance from established connections. Active connections may have very large window sizes. Larger windows yield lower overhead from individual acknowledgements. As with the initial window size, the amount that the window increases is also operating system specific. Either end of the conversation, or an observer along the network path, can use the initial window size and increment information to fingerprint the operating systems involved.
TCP Options -
Each TCP packet can contain optional TCP header values, including window scaling, maximum segment size, SACK support, and timestamp information. Different operating systems support different option selections, values, and ordering. A default RedHat Linux 7.1 system (2.4 kernel) includes five options: a maximum segment size of 1,460 bytes, SACK support, timestamp information, a no-operation (NOP), and a window scale of zero. Debian Linux 3.1 (2.6 kernel) includes the same options but with a window scale of two. In contrast, Windows XP includes nine options: maximum segment of 1,460 bytes, NOP, window scale of zero, NOP, NOP, timestamp, NOP, NOP, and SACK support. An SMC 7004 Barricade router only includes one option: specifying the maximum segment. By observing the initial TCP options, values, and ordering, specific operating systems can be identified. In some cases, the TCP options can be unique enough to identify the operating system as well as the patch level. Knowing the patch level of a system greatly assists an attacker because it identifies unpatched vulnerabilities.
Sequence Numbering -
Although all systems that implement TCP increment sequence numbers the same way, the initial sequence number is operating system specific. The initial SYN and SYN-ACK packets exchange the starting sequence numbers for the connection. Although a single TCP connection cannot disclose identifiable information, a series of rapid connections can disclose the pattern used to establish the initial connection. Older operating systems, such as Windows 95, Windows 98, and OS/2, and embedded systems (e.g., VxWorks) linearly increment each new sequence number.
A series of SYN requests will be met with a series of SYN-ACK replies that contain a sequential set of numbers. For example, each SYN-ACK reply from OS/2 version 3.0 increases the initial sequence number by 64,000. The D-Link DI-604 home router increases the sequence based on the current time. Linux systems use positive incrementing sequence numbers, but the amount of each increment is not linear. In contrast, most BSD systems use very random initial increment values. As with the initial window size and TCP options, sequence numbering can be used to identify operating system, version, and patch-level version information.
Client Port Numbering -
Although servers are bound to a fixed TCP port number, clients choose any available port number for use with the connection. The server’s port number must be fixed so the client knows where to connect. But, the server can determine the client’s dynamic port number from the TCP header. Repeated connections from the client to one or more servers will show different port numbers for each connection.
Different operating systems use different dynamic, or ephemeral, port ranges for selection by the client. Ports 0 to 1023 are usually reserved for well-known services. Even if a server is not using one of these port numbers, clients will not normally use them for outbound connections. Similarly, TCP ports 49,152 to 65,535 are usually reserved for private ports. Different operating systems use different subsets of the remaining range for ephemeral ports. For example, Red Hat Linux 7.1 defaults to the range 1024 to 4999. Ubuntu Linux 5.04 uses the range 32,768 to 61,000. The Linux command sysctl net.ipv4.ip_local_port_range displays the ephemeral port range. Under FreeBSD and Mac OS X, the command is sysctl -a | grep portrange. By observing the ephemeral port range used by a client, the type of operating system can be narrowed down, and in some cases, uniquely identified.
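As a small illustration of the server-side view (the listening port 7000 is an arbitrary choice), a Java server can log each client's ephemeral port as it accepts connections:

    import java.net.ServerSocket;
    import java.net.Socket;

    public class EphemeralPortObserver {
        public static void main(String[] args) throws Exception {
            ServerSocket server = new ServerSocket(7000);
            while (true) {
                Socket client = server.accept();
                // The client's ephemeral port hints at its operating system's default range.
                System.out.println(client.getInetAddress().getHostAddress()
                        + " connected from ephemeral port " + client.getPort());
                client.close();
            }
        }
    }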
Retries -
When a TCP packet does not receive an acknowledgement, the packet is resent. The number of retries and duration between retries is operating system specific. Fingerprinting based on retries can be done in several ways:
SYN retries: Without responding, count the number of SYN packets and the duration between packets. Most Windows systems transmit three SYN packets, 3 seconds apart before giving up. Linux defaults to five, but the duration progressively expands—the first two are 3 seconds apart, then 6 seconds, 12 seconds, and so on.
SYN-ACK retries: A client can connect to a server (generating a SYN) and observe the number of SYN-ACK replies.
ACK retries: After establishing the connection, the system can fail to provide an ACK. The number of observed retries from an established connection is generally more than from SYN or SYN-ACK retries.
Profiling Tools -
There are a few popular tools for profiling systems, including Snacktime, p0f, and Nmap:
Snacktime: This open source tool fingerprints hosts based on TCP window sizes, options, and retry durations. It only needs to query one open TCP port to fingerprint a system. This tool is included on the CD-ROM.
p0f and Nmap: These tools query two ports to detect subtle changes in the server’s TCP configuration. In this technique, one port must be open and another port must be closed. Besides determining the operating system type, these tools can also identify how long the system has been running.
To better study the people who perform system profiling, the Honeynet Project offers a tool called honeyd (http://www.honeyd.org/). This tool creates virtual online systems for use as honeypots—systems used to monitor malicious activity. Honeyd can impersonate most operating systems. Tools such as Nmap and p0f cannot distinguish a real Windows NT 4 SP3 system from a virtual one.
Honeyd does have a few limitations. Although it does impersonate the internal stack, it does not impersonate the default ephemeral port ranges, TCP option ordering, or retry durations. Although Nmap may identify a Linux honeyd system as “Windows NT 4 SP3,” Snacktime may detect discrepancies in the results.
Anti-Profiling Options -
Profiling is a useful diagnostic technique, but it can also be used for reconnaissance prior to an attack. An attacker can use network profiling to identify underlying operating systems and patch levels. For example, an attacker who identifies a FreeBSD operating system will likely decide against trying a Windows-specific exploit and select a FreeBSD exploit. By changing the default window size, retry timeouts, TCP options, and ephemeral port range, a system can alter its appearance. A Windows XP system that looks like a Debian Linux system may face fewer Windows-specific attacks. Because changing a system’s default TCP settings is uncommon, attackers are likely to trust misinformation derived from reconnaissance.
Most network viruses blindly attempt exploits against all network addresses. Changing the system’s profile will not mitigate these attacks. Directed attacks based on system profiling can be misled, however, resulting in fewer profile-specific attacks. TCP ports provide an entrance into the system. Many network viruses scan for well-known server ports. If an open port is found on a host, then an exploit is attempted.
To scan large networks, most viruses limit the number of TCP SYN retries— sending one or two before moving on. Although uncommon, TCP servers may use a simple knock-knock protocol to limit the impact from virus scans. Rather than acknowledging the first SYN packet, the server may wait for the third. Although this increases the time for an initial TCP connection, skipping the first two SYN packets decreases the chances of detection by automated reconnaissance.
Port Scans -
TCP port scans are used to identify running services. A port scan attempts to connect to ports and records the results. In general, there are four types of replies to any connection attempts:
SYN-ACK: If a service is running on the port, then a SYN-ACK will be returned to the client. This is a positive identification. To prevent detection, some firewalls always return a SYN-ACK, even if no service is available. As a result of this countermeasure, a scanner cannot identify open ports.
RST: If no service is running, many systems return an RST packet. This provides a quick confirmation that there is no service on the port.
ICMP Unreachable: If the host is unreachable, then an ICMP packet may be returned indicating a failure. This leaves the port state unknown because it could not be reached for testing. Firewalls, such as lokkit and iptables used by RedHat Linux, return an ICMP Unreachable instead of a TCP RST packet to confuse scanners.
Nothing: If the packet fails to reach the host, there may be no reply at all. SYN requests will timeout without a SYN-ACK. Although this usually means that the host is unreachable or offline, some firewalls and operating systems intentionally ignore packets sent to closed ports. For example, OpenBSD and Hewlett-Packard’s Virtual Vault do not reply to connections against closed ports. This prevents a remote client from distinguishing a closed port from an unreachable host.
Port scans can either be full or partial. A full port scan completes the entire three-way handshake. In contrast, a partial port scan only waits for the SYN-ACK. Although a full port scan can identify the type of service on an open port, a partial scan identifies only that a service exists.
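A minimal sketch of a full-connect scan in Java (the target address is a placeholder from the TEST-NET range, and the 200 ms timeout is an arbitrary choice); because Socket.connect completes the three-way handshake, this is a full scan and will show up in service logs:

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class SimplePortScan {
        public static void main(String[] args) {
            String host = "192.0.2.10";   // hypothetical target address
            for (int port = 1; port <= 1024; port++) {
                Socket socket = new Socket();
                try {
                    // connect() completes the full three-way handshake.
                    socket.connect(new InetSocketAddress(host, port), 200);
                    System.out.println("Port " + port + " open (SYN-ACK received)");
                } catch (Exception e) {
                    // RST, ICMP unreachable, or a timeout all surface here as exceptions.
                } finally {
                    try { socket.close(); } catch (Exception ignore) {}
                }
            }
        }
    }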
Logging -
Logging is important for detecting system scans and network attacks. Many network services log connections, including timestamps, client network addresses, and related connection information. Few systems log raw TCP traffic. Instead, higher OSI layers usually perform logging. Higher layers do not log TCP connections until the handshake completes. This is important because, with TCP, the connection is not complete until the full three-way handshake is performed. As a result, partial port scans—where the full handshakes are not completed—are usually not logged.
Network monitoring tools, such as IDS and IPS, commonly monitor and log SYN requests as well as any traffic not included as part of an established connection. Just as SYN packets are recorded, unsolicited ACK and RST packets are also logged. Based on the frequency, type, and order of these packets, network scans from tools such as Nmap and p0f can be identified. In the case of an IPS, reactive steps can be taken before the scan completes to prevent too much information leakage and to limit service detection. If an attacker cannot identify the type of system, then his ability to compromise the system greatly diminishes.
The physical world and the digital world have many similarities when it comes to security processes. The need for authentication, authorization, confidentiality, and integrity do not change from the physical world to the digital one. They do, however, change in execution through digital means and medium. For instance, the authentication of a person cannot always be done through physical recognition since the person could be across the world sitting in front of a computer. In such a case, the authentication process must be through digital means. Instead of identification cards and drivers' licenses, certificates with the user's information must be used. The certificate is a form of credential, a digital form similar to a driver's license. Another form of credential is the password used when a person logs in to a Web site.
Once the identity has been matched with a credential and accepted by an organization's system, authentication is achieved. The authorization process requires a lookup of the permission set and digital identification to see if the user has access to a resource.
In order to achieve confidentiality, the system can use the user's key for encryption and decryption. A secret key is a single key that can be used for both encryption and decryption. A key acts as a digital token for allowing data to be read by users who only have access to the secret key. To check the integrity of the information, the system hashes the information into a new hashed information block. The hashed information block is a smaller block of information that uniquely represents the original information. When the information must be checked, the hash block is created again and the two blocks are compared. If the blocks match, the system concludes that the information has not been modified.
The digital processes are merely personal security techniques applied to the digital world. The physical world simply does not apply anymore, except in the case of isolation, which is the process of physically isolating the systems from digital access to protect the systems.
Security is ever-evolving and dynamic; therefore, an enterprise's security architecture must be flexible and agile enough to change as the times and security requirements change. There is one concept that is constant in computer science: It is ever-evolving. At one time, someone was writing x86 assembler, and now they write JSPs and EJBs. Some of the concepts have remained the same; however, technology has changed. An organization's architecture must be designed so that one year it can use Kerberos and the next X.509 certificates with minimal change.
The endpoints of the organization must be constantly monitored to support security. It does little good for a Web site to have a lot of security if the server sits on a Windows NT machine that is accessible across the Internet and open to the world. The network engineers should always be aware of which machines are open and which are not, and make sure that the only way to reach secure information is through proper security mechanisms.
The organization that wants to establish security needs to define security requirements, such as identifying which resources are sensitive. For example, the needs of a government and a non-profit organization could be very different. Therefore, the requirements are based on the type of organization, and a security policy is established to define how to enforce these requirements. The security policy governs and dictates the standards, procedures, and practices for the organization. The practices will elicit security rule sets for any resource that should be secure.
It is best to assign a security advisor to keep a running list of administrative usernames and passwords so that, if access is lost to the system, it can be recovered by logging in as the administrator. A plan needs to be devised that regulates, tests, maintains, and updates the security system at regular intervals.